Analysis of Syntax-Based Pronoun Resolution Methods
Joel R. Tetreault
University of Rochester
Department of Computer Science
Rochester, NY, 14627
tetreaul@cs.rochester.edu

Abstract
This paper presents a pronoun resolution algorithm that adheres to the constraints and rules of Centering Theory (Grosz et al., 1995) and is an alternative to Brennan et al.'s 1987 algorithm. The advantages of this new model, the Left-Right Centering Algorithm (LRC), lie in its incremental processing of utterances and in its low computational overhead. The algorithm is compared with three other pronoun resolution methods: Hobbs' syntax-based algorithm, Strube's S-list approach, and the BFP Centering algorithm. All four methods were implemented in a system and tested on an annotated subset of the Treebank corpus consisting of 2026 pronouns. The noteworthy results were that Hobbs and LRC performed the best.

1 Introduction
The aim of this project is to develop a pronoun resolution algorithm which performs better than the Brennan et al. 1987 algorithm (henceforth BFP) as a cognitive model while also performing well empirically. A revised algorithm (Left-Right Centering) was motivated by the fact that the BFP algorithm did not allow for incremental processing of an utterance and hence of its pronouns, and also by the fact that it occasionally imposes a high computational load, detracting from its psycholinguistic plausibility. A second motivation for the project is to remedy the dearth of empirical results on pronoun resolution methods. Many small comparisons of methods have been made, such as by Strube (1998) and Walker (1989), but those usually consist of statistics based on a small hand-tested corpus. The problem with evaluating algorithms by hand is that it is time consuming and difficult to process corpora that are large enough to provide reliable, broadly based statistics.
By creating a system that can run algorithms, one can easily and quickly analyze large amounts of data and generate more reliable results. In this project, the new algorithm is tested against three leading syntax-based pronoun resolution methods: Hobbs' naive algorithm (1977), S-list (Strube 1998), and BFP.

Section 2 presents the motivation and algorithm for Left-Right Centering. In Section 3, the results of the algorithms are presented and then discussed in Section 4.

2 Left-Right Centering Algorithm
Left-Right Centering (LRC) is a formalized algorithm built upon centering theory's constraints and rules as detailed in Grosz et al. (1995). The creation of the LRC algorithm is motivated by two drawbacks found in the BFP method. The first is BFP's limitation as a cognitive model, since it makes no provision for incremental resolution of pronouns (Kehler 1997). Psycholinguistic research supports the claim that listeners process utterances one word at a time, so when they hear a pronoun they will try to resolve it immediately. If new information comes into play which makes the resolution incorrect (such as a violation of binding constraints), the listener will go back and find a correct antecedent. This incremental resolution problem also motivates Strube's S-list approach.

The second drawback to the BFP algorithm is the computational explosion of generating and filtering anchors. In utterances with two or more pronouns and a Cf-list with several candidate antecedents for each pronoun, thousands of anchors can easily be generated, making for a time-consuming filtering phase. An example from the evaluation corpus illustrates this problem (the italics in Un-1 represent possible antecedents for the pronouns (in italics) of Un):

Un-1: Separately, the Federal Energy Regulatory Commission turned down for now a request by Northeast seeking approval of its possible purchase of PS of New Hampshire.
Un: Northeast said it would refile its request and still hopes for an expedited review by the FERC so that it could complete the purchase by next summer if its bid is the one approved by the bankruptcy court.

With four pronouns in Un, and eight possible antecedents for each in Un-1, 4096 unique Cf-lists are generated. In the cross-product phase, 9 possible Cb's are crossed with the 4096 Cf's, generating 36864 anchors.

Given these drawbacks, we propose a revised resolution algorithm that adheres to centering constraints. It works by first searching for an antecedent in the current utterance(1); if one is not found, then the previous Cf-lists (starting with the previous utterance) are searched left-to-right for an antecedent:

1. Preprocessing - from the previous utterance, Cb(Un-1) and Cf(Un-1) are available.
2. Process Utterance - parse and extract incrementally from Un all references to discourse entities. For each pronoun do:
   (a) Search for an antecedent intrasententially in Cf-partial(Un)(2) that meets feature and binding constraints. If one is found, proceed to the next pronoun within the utterance. Else go to (b).
   (b) Search for an antecedent intersententially in Cf(Un-1) that meets feature and binding constraints.
3. Create Cf - create the Cf-list of Un by ranking the discourse entities of Un according to grammatical function. Our implementation used a left-right breadth-first walk of the parse tree to approximate sorting by grammatical function.
4. Identify Cb - the backward-looking center is the most highly ranked entity from Cf(Un-1) realized in Cf(Un).
5. Identify Transition - with the Cb and Cf resolved, use the criteria from (Brennan et al., 1987) to assign the transition.

(1) In this project, a sentence is considered an utterance.
(2) Cf-partial is a list of all processed discourse entities in Un.
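The resolution loop in step 2 can be sketched in Python. This is a minimal sketch under assumed data structures (plain dicts and lists), and `agrees` stands in for the paper's full feature and binding constraint check; none of these names come from the paper's implementation.

```python
# Sketch of LRC's step 2: intrasentential search first, then earlier Cf-lists.
# agrees() is a hypothetical stand-in for the feature/binding constraint
# check (binding constraints are omitted here for brevity).

def agrees(pronoun, entity):
    """Hypothetical feature check: number and gender must be compatible."""
    return (pronoun["number"] == entity["number"]
            and pronoun["gender"] == entity["gender"])

def resolve_lrc(pronoun, cf_partial, previous_cfs):
    """Return the first compatible antecedent, searching Cf-partial(Un)
    left to right, then Cf(Un-1), Cf(Un-2), ... in order."""
    for entity in cf_partial:                 # step 2(a): intrasentential
        if agrees(pronoun, entity):
            return entity
    for cf in previous_cfs:                   # step 2(b): intersentential
        for entity in cf:
            if agrees(pronoun, entity):
                return entity
    return None
```

Because both searches run left to right over salience-ordered lists, the first match is also the most salient compatible antecedent, which is what gives the algorithm its incremental character.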
It should be noted that while BFP makes use of Centering Rule 2 (Grosz et al., 1995), LRC does not use the transition generated or Rule 2 in steps 4 and 5, since Rule 2's role in pronoun resolution is not yet known (see Kehler 1997 for a critique of its use by BFP). Computational overhead is avoided since no anchors or auxiliary data structures need to be produced and filtered.

3 Evaluation of Algorithms
All four algorithms were run on a 3900-utterance subset of the Penn Treebank annotated corpus (Marcus et al., 1993) provided by Charniak and Ge (1998). The corpus consists of 195 different newspaper articles. Sentences are fully bracketed and have labels that indicate word-class and features. Because the S-list and BFP algorithms do not allow resolution of quoted text, all quoted expressions were removed from the corpus, leaving 1696 pronouns (out of 2026) to be resolved.

For analysis, the algorithms were broken up into two classes. The "N" group consists of algorithms that search intersententially through all Cf-lists for an antecedent. The "1" group consists of algorithms that can only search for an antecedent in Cf(Un-1). The results for the "N" algorithms and "1" algorithms are depicted in Figures 1 and 2 respectively.

For comparison, a baseline algorithm was created which simply took the most recent NP (by surface order) that met binding and feature constraints. This naive approach resolved 28.6 percent of pronouns correctly. Clearly, all four algorithms perform better than the naive approach. The following section discusses the performance of each algorithm.

4 Discussion
The surprising result from this evaluation is that the Hobbs algorithm, which uses the least amount of information, actually performs the best.
The difference of six more pronouns right between LRC-N and Hobbs is statistically insignificant, so one may conclude that the new centering algorithm is also a viable method.

Algorithm  Right  % Right  % Right Intra  % Right Inter
Hobbs      1234   72.8     68.4           85.0
LRC-N      1228   72.4     67.8           85.2
Strube-N   1166   68.8     62.9           85.2

Figure 1: "N" algorithms: search all previous Cf-lists

Algorithm  Right  % Right  % Right Intra  % Right Inter
LRC-1      1208   71.2     68.4           80.7
Strube-1   1120   66.0     60.3           71.1
BFP         962   56.7     40.7           78.8

Figure 2: "1" algorithms: search Cf(Un-1) only

Why do these algorithms perform better than the others? First, both search for referents intrasententially and then intersententially. In this corpus, over 71% of all pronouns have intrasentential referents, so clearly an algorithm that favors the current utterance will perform better. Second, both search their respective data structures in a salience-first manner. Intersententially, both examine previous utterances in the same manner. LRC-N sorts the Cf-list by grammatical function using a breadth-first search and by moving prepended phrases to a less salient position. While Hobbs' algorithm does not do the movement, it still searches its parse tree in a breadth-first manner, thus emulating the Cf-list search. Intrasententially, Hobbs gets slightly more correct since it first favors antecedents close to the pronoun before searching the rest of the tree. LRC favors entities near the head of the sentence under the assumption that they are more salient. The similarities in intra- and intersentential evaluation are reflected in the similarities in their percent right for the respective categories.

Because the S-list approach incorporates both semantics and syntax in its familiarity ranking scheme, a shallow version which only uses syntax is implemented in this study. Even though several entities were incorrectly labeled, the shallow S-list approach still performed quite well, only 4 percent lower than Hobbs and LRC-1.
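The breadth-first approximation of grammatical-function ranking described above can be sketched as follows. The nested-list tree encoding and the bare "NP" label test are illustrative assumptions, not the paper's parse representation.

```python
# Sketch: approximate ranking by grammatical function with a left-to-right
# breadth-first walk of the parse tree. Subjects sit higher in the tree,
# so they are emitted before more deeply embedded objects.

from collections import deque

def flatten(children):
    """Yield the leaf words under a list of child nodes."""
    for c in children:
        if isinstance(c, list):
            yield from flatten(c[1:])
        else:
            yield c

def cf_list(tree):
    """Return NP strings in breadth-first, left-to-right order."""
    order, queue = [], deque([tree])
    while queue:
        node = queue.popleft()
        label, children = node[0], node[1:]
        if label == "NP":
            order.append(" ".join(flatten(children)))
        queue.extend(c for c in children if isinstance(c, list))
    return order
```

On a tree like ["S", ["NP", "John"], ["VP", "saw", ["NP", "the", "dog"]]], the subject NP is emitted before the object NP, which is the ordering the Cf-list needs.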
The standing of the BFP algorithm should not be too surprising given past studies. For example, Strube (1997) had the S-list algorithm performing at 91 percent correct on three New York Times articles, while the best version of BFP performed at 81 percent. This ten percent difference is reflected in the present evaluation as well. The main drawback for BFP was its preference for intersentential resolution. Also, BFP as formally defined does not have an intrasentential processing mechanism. For the purposes of the project, the LRC intrasentential technique was used to resolve pronouns that were unable to be resolved by the BFP (intersentential) algorithm.

In additional experiments, Hobbs and LRC-N were tested with quoted expressions included. LRC used an approach similar to the one proposed by Kameyama (1998) for analyzing quoted expressions. Given this new approach, 70.4% of the 2026 pronouns were resolved correctly by LRC while Hobbs performed at 69.8%, a difference of only 13 pronouns right.

5 Conclusions
This paper first presented a revised pronoun resolution algorithm that adheres to the constraints of centering theory. It is inspired by the need to remedy a lack of incremental processing and computational issues with the BFP algorithm. Second, the performance of LRC was compared against three other leading pronoun resolution algorithms based solely on syntax. The comparison of these algorithms is significant in its own right because they have not been previously compared, in computer-encoded form, on a common corpus. Coding all the algorithms allows one to quickly test them all on a large corpus and eliminates human error, both shortcomings of hand evaluation.

Most noteworthy is the performance of Hobbs and LRC. The Hobbs approach reveals that a walk of the parse tree performs just as well as salience-based approaches.
LRC performs just as well as Hobbs, but the important point is that it can be considered as a replacement for the BFP algorithm, not only in terms of performance but in terms of modeling. In terms of implementation, Hobbs is dependent on a precise parse tree for its analysis. If no parse tree is available, Strube's S-list algorithm and LRC prove more useful, since grammatical function can be approximated by using surface order.

6 Future Work
The next step is to test all four algorithms on novels or short stories. Statistics from the Walker and Strube studies suggest that BFP will perform better in these cases. Other future work includes constructing a hybrid algorithm of LRC and S-list in which entities are ranked both by the familiarity scale and by grammatical function. Research into how transitions and the Cb can be used in a pronoun resolution algorithm should also be examined. Strube and Hahn (1996) developed a heuristic of ranking transition pairs by cost to evaluate different Cf-ranking schemes. Perhaps this heuristic could be used to constrain the search for antecedents.

It is quite possible that hybrid algorithms (i.e., using Hobbs for intrasentential resolution, LRC for intersentential) may not produce any significant improvement over the current systems. If so, this might indicate that purely syntactic methods cannot be pushed much farther, and the upper limit reached can serve as a baseline for approaches that combine syntax and semantics.

7 Acknowledgments
I am grateful to Barbara Grosz for aiding me in the development of the LRC algorithm and discussing centering issues. I am also grateful to Donna Byron, who was responsible for much brainstorming, cross-checking of results, and coding of the Hobbs algorithm. Special thanks go to Michael Strube, James Allen, and Lenhart Schubert for their advice and brainstorming. We would also like to thank Charniak and Ge for the annotated, parsed Treebank corpus, which proved invaluable.
Partial support for the research reported in this paper was provided by the National Science Foundation under Grants No. IRI-90-09018, IRI-94-04756 and CDA-94-01024 to Harvard University, and also by DARPA research grant no. F30602-98-2-0133 to the University of Rochester.

References
Susan E. Brennan, Marilyn W. Friedman, and Carl J. Pollard. 1987. A centering approach to pronouns. In Proceedings, 25th Annual Meeting of the ACL, pages 155-162.
Niyu Ge, John Hale, and Eugene Charniak. 1998. A statistical approach to anaphora resolution. In Proceedings of the Sixth Workshop on Very Large Corpora.
Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203-226.
Jerry R. Hobbs. 1977. Resolving pronoun references. Lingua, 44:311-338.
Megumi Kameyama. 1986. Intrasentential centering: A case study. In Centering Theory in Discourse.
Andrew Kehler. 1997. Current theories of centering for pronoun interpretation: A critical evaluation. Computational Linguistics, 23(3):467-475.
Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.
Michael Strube and Udo Hahn. 1996. Functional centering. In Association for Computational Linguistics, pages 270-277.
Michael Strube. 1998. Never look back: An alternative to centering. In Association for Computational Linguistics, pages 1251-1257.
Marilyn A. Walker. 1989. Evaluating discourse processing algorithms. In Proceedings, 27th Annual Meeting of the Association for Computational Linguistics, pages 251-261.
Finding Parts in Very Large Corpora
Matthew Berland, Eugene Charniak
rob,ec@cs.brown.edu
Department of Computer Science
Brown University, Box 1910
Providence, RI 02912

Abstract
We present a method for extracting parts of objects from wholes (e.g. "speedometer" from "car"). Given a very large corpus, our method finds part words with 55% accuracy for the top 50 words as ranked by the system. The part list could be scanned by an end-user and added to an existing ontology (such as WordNet), or used as part of a rough semantic lexicon.

1 Introduction
We present a method of extracting parts of objects from wholes (e.g. "speedometer" from "car"). To be more precise, given a single word denoting some entity that has recognizable parts, the system finds and rank-orders other words that may denote parts of the entity in question. Thus the relation found is, strictly speaking, between words, a relation Miller [1] calls "meronymy." In this paper we use the more colloquial "part-of" terminology. We produce words with 55% accuracy for the top 50 words ranked by the system, given a very large corpus. Lacking an objective definition of the part-of relation, we use the majority judgment of five human subjects to decide which proposed parts are correct. The program's output could be scanned by an end-user and added to an existing ontology (e.g., WordNet), or used as part of a rough semantic lexicon.

To the best of our knowledge, there is no published work on automatically finding parts from unlabeled corpora. Casting our nets wider, the work most similar to what we present here is that by Hearst [2] on acquisition of hyponyms ("isa" relations).
In that paper Hearst (a) finds lexical correlates to the hyponym relations by looking in text for cases where known hyponyms appear in proximity (e.g., in the construction (NP, NP and (NP other NN)) as in "boats, cars, and other vehicles"), (b) tests the proposed patterns for validity, and (c) uses them to extract relations from a corpus. In this paper we apply much the same methodology to the part-of relation. Indeed, in [2] Hearst states that she tried to apply this strategy to the part-of relation, but failed. We comment later on the differences in our approach that we believe were most important to our comparative success.

Looking more widely still, there is an ever-growing literature on the use of statistical/corpus-based techniques in the automatic acquisition of lexical-semantic knowledge ([3-8]). We take it as axiomatic that such knowledge is tremendously useful in a wide variety of tasks, from lower-level tasks like noun-phrase reference and parsing to user-level tasks such as web searches, question answering, and digesting. Certainly the large number of projects that use WordNet [1] would support this contention. And although WordNet is hand-built, there is general agreement that corpus-based methods have an advantage in the relative completeness of their coverage, particularly when used as supplements to the more labor-intensive methods.

2 Finding Parts
2.1 Parts
Webster's Dictionary defines "part" as "one of the often indefinite or unequal subdivisions into which something is or is regarded as divided and which together constitute the whole." The vagueness of this definition translates into a lack of guidance on exactly what constitutes a part, which in turn translates into some doubts about evaluating the results of any procedure that claims to find them. More specifically, note that the definition does not claim that parts must be physical objects. Thus, say, "novel" might have "plot" as a part.
In this study we handle this problem by asking informants which words in a list are parts of some target word, and then declaring majority opinion to be correct. We give more details on this aspect of the study later. Here we simply note that while our subjects often disagreed, there was fair consensus that what might count as a part depends on the nature of the word: a physical object yields physical parts, an institution yields its members, and a concept yields its characteristics and processes. In other words, "floor" is part of "building" and "plot" is part of "book."

2.2 Patterns
Our first goal is to find lexical patterns that tend to indicate part-whole relations. Following Hearst [2], we find possible patterns by taking two words that are in a part-whole relation (e.g., basement and building) and finding sentences in our corpus (we used the North American News Corpus (NANC) from LDC) that have these words within close proximity. The first few such sentences are:

... the basement of the building.
... the basement in question is in a four-story apartment building ...
... the basement of the apartment building.
From the building's basement ...
... the basement of a building ...
... the basements of buildings ...

From these examples we construct the five patterns shown in Table 1. We assume here that parts and wholes are represented by individual lexical items (more specifically, as head nouns of noun-phrases) as opposed to complete noun phrases, or as a sequence of "important" noun modifiers together with the head. This occasionally causes problems, e.g., "conditioner" was marked by our informants as not part of "car", whereas "air conditioner" probably would have made it into a part list. Nevertheless, in most cases head nouns have worked quite well on their own.

We evaluated these patterns by observing how they performed in an experiment on a single example.
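The two productive patterns (A and B, described below) can be sketched as regular expressions over word_TAG text. This is an illustrative sketch, not the authors' matcher: the tag inventory and the simplified modifier slot are assumptions patterned after the table of patterns.

```python
import re

# Sketch of pattern A ("whole's part") and pattern B ("part of {the|a} whole")
# over word_TAG text such as "the_DT basement_NN of_IN a_DT building_NN".
# Tag names (NN/NNS, POS, IN, DT, JJ) are assumed Penn-style; the mods slot
# corresponds to the [JJ|NN]* position in the pattern table.

NN   = r"(\w+)_NNS?"                  # head noun, singular or plural
MODS = r"(?:\w+_(?:JJ|NN)\s+)*"       # optional adjective/noun modifiers

PATTERN_A = re.compile(NN + r"\s+'s_POS\s+" + NN)                    # whole, part
PATTERN_B = re.compile(NN + r"\s+of_IN\s+(?:the|a|an)_DT\s+" + MODS + NN)  # part, whole

def find_pairs(tagged_text):
    """Return (whole, part) pairs found by patterns A and B."""
    pairs = [(w, p) for w, p in PATTERN_A.findall(tagged_text)]
    pairs += [(w, p) for p, w in PATTERN_B.findall(tagged_text)]
    return pairs
```

For example, both "the_DT building_NN 's_POS basement_NN" and "the_DT basement_NN of_IN a_DT apartment_NN building_NN" yield the pair (building, basement).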
Table 2 shows the 20 highest ranked part words (with the seed word "car") for each of the patterns A-E. (We discuss later how the rankings were obtained.) Table 2 shows patterns A and B clearly outperform patterns C, D, and E. Although parts occur in all five patterns, the lists for A and B are predominately parts-oriented. The relatively poor performance of patterns C and E was anticipated, as many things occur "in" cars (or buildings, etc.) other than their parts. Pattern D is not so obviously bad, as it differs from the plural case of pattern B only in the lack of the determiner "the" or "a". However, this difference proves critical in that pattern D tends to pick up "counting" nouns such as "truckload." On the basis of this experiment we decided to proceed using only patterns A and B from Table 1.

A. whole NN[-PL] 's POS part NN[-PL]
   ... building's basement ...
B. part NN[-PL] of PREP {the|a} DET mods [JJ|NN]* whole NN
   ... basement of a building ...
C. part NN in PREP {the|a} DET mods [JJ|NN]* whole NN
   ... basement in a building ...
D. parts NN-PL of PREP wholes NN-PL
   ... basements of buildings ...
E. parts NN-PL in PREP wholes NN-PL
   ... basements in buildings ...

Format: type_of_word TAG type_of_word TAG ...
NN = Noun, NN-PL = Plural Noun, DET = Determiner, PREP = Preposition, POS = Possessive, JJ = Adjective

Table 1: Patterns for partOf(basement,building)

3 Algorithm
3.1 Input
We use the LDC North American News Corpus (NANC), which is a compilation of the wire output of several US newspapers. The total corpus is about 100,000,000 words. We ran our program on the whole data set, which takes roughly four hours on our network. The bulk of that time (around 90%) is spent tagging the corpus.

As is typical in this sort of work, we assume that our evidence (occurrences of patterns A and B) is independently and identically distributed (iid). We have found this assumption reasonable, but its breakdown has led to a few errors.
In particular, a drawback of the NANC is the occurrence of repeated articles; since the corpus consists of all of the articles that come over the wire, some days include multiple, updated versions of the same story, containing identical paragraphs or sentences. We wrote programs to weed out such cases, but ultimately found them of little use. First, "update" articles still have substantial variation, so there is a continuum between these and articles that are simply on the same topic. Second, our data is so sparse that any such repeats are very unlikely to manifest themselves as repeated examples of part-type patterns. Nevertheless, since two or three occurrences of a word can make it rank highly, our results have a few anomalies that stem from failure of the iid assumption (e.g., quite appropriately, "clunker").

Pattern A: headlight windshield ignition shifter dashboard radiator brake tailpipe pipe airbag speedometer converter hood trunk visor vent wheel occupant engine tyre
Pattern B: trunk wheel driver hood occupant seat bumper backseat dashboard jalopy fender rear roof windshield back clunker window shipment reenactment axle
Pattern C: passenger gunmen leaflet hop houseplant airbag gun koran cocaine getaway motorist phone men indecency person ride woman detonator kid key
Pattern D: import caravan make dozen carcass shipment hundred thousand sale export model truckload queue million boatload inventory hood registration trunk ten
Pattern E: airbag packet switch gem amateur device handgun passenger fire smuggler phone tag driver weapon meal compartment croatian defect refugee delay

Table 2: Grammatical Pattern Comparison

Our seeds are one word (such as "car") and its plural. We do not claim that all single words would fare as well as our seeds, as we picked highly probable words for our corpus (such as "building" and "hospital") that we thought would have parts that might also be mentioned therein.
With enough text, one could probably get reasonable results with any noun that met these criteria.

3.2 Statistical Methods
The program has three phases. The first identifies and records all occurrences of patterns A and B in our corpus. The second filters out all words ending with "ing", "ness", or "ity", since these suffixes typically occur in words that denote a quality rather than a physical object. Finally, we order the possible parts by the likelihood that they are true parts according to some appropriate metric.

We took some care in the selection of this metric. At an intuitive level the metric should be something like p(w | p). (Here and in what follows, w denotes the outcome of the random variable generating wholes, and p the outcome for parts. W(w) states that w appears in the patterns A-B as a whole, while P(p) states that p appears as a part.) Metrics of the form p(w | p) have the desirable property that they are invariant over p with radically different base frequencies, and for this reason have been widely used in corpus-based lexical semantic research [3,6,9]. However, in making this intuitive idea somewhat more precise, we found two closely related versions:

p(w, W(w) | p)
p(w, W(w) | p, P(p))

We call metrics based on the first of these "loosely conditioned" and those based on the second "strongly conditioned".

While invariance with respect to frequency is generally a good property, such invariant metrics can lead to bad results when used with sparse data. In particular, if a part word p has occurred only once in the data in the A-B patterns, then perforce p(w | p) = 1 for the entity w with which it is paired. Thus this metric must be tempered to take into account the quantity of data that supports its conclusion. To put this another way, we want to pick (w,p) pairs that have two properties: p(w | p) is high and |w,p| is large. We need a metric that combines these two desiderata in a natural way. We tried two such metrics.
The first is Dunning's [10] log-likelihood metric, which measures how "surprised" one would be to observe the data counts |w,p|, |¬w,p|, |w,¬p| and |¬w,¬p| if one assumes that p(w | p) = p(w). Intuitively this will be high when the observed p(w | p) >> p(w) and when the counts supporting this calculation are large.

The second metric is proposed by Johnson (personal communication). He suggests asking the question: how far apart can we be sure the distributions p(w | p) and p(w) are if we require a particular significance level, say .05 or .01? We call this new test the "significant-difference" test, or sigdiff. Johnson observes that compared to sigdiff, log-likelihood tends to overestimate the importance of data frequency at the expense of the distance between p(w | p) and p(w).

3.3 Comparison
Table 3 shows the 20 highest ranked words for each statistical method, using the seed word "car." The first group contains the words found for the method we perceive as the most accurate, sigdiff and strong conditioning. The other groups show the differences between them and the first group. The + category means that this method adds the word to its list, - means the opposite. For example, "back" is on the sigdiff-loose list but not the sigdiff-strong list.

In general, sigdiff worked better than surprise, and strong conditioning worked better than loose conditioning. In both cases the less favored methods tend to promote words that are less specific ("back" over "airbag", "use" over "radiator").
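Dunning's statistic over the four counts can be sketched as the standard G² (log-likelihood ratio) computation; this is the textbook form of the test, not code from the paper, and the function names are our own.

```python
import math

# Sketch of Dunning's log-likelihood (G^2) statistic over the 2x2 table
# k11=|w,p|, k12=|~w,p|, k21=|w,~p|, k22=|~w,~p|. High values mean the
# observed p(w|p) differs from the pooled p(w) more than chance predicts.

def _ll(k, n, p):
    """Log-likelihood of k successes in n trials at success probability p."""
    p = min(max(p, 1e-12), 1 - 1e-12)          # guard against log(0)
    return k * math.log(p) + (n - k) * math.log(1 - p)

def log_likelihood_ratio(k11, k12, k21, k22):
    n1, n2 = k11 + k12, k21 + k22              # totals with and without p
    p1, p2 = k11 / n1, k21 / n2                # p(w|p) and p(w|~p)
    p = (k11 + k21) / (n1 + n2)                # pooled p(w)
    return 2 * (_ll(k11, n1, p1) + _ll(k21, n2, p2)
                - _ll(k11, n1, p) - _ll(k21, n2, p))
```

The statistic is near zero when the two conditional distributions agree, and grows both with the distance between p(w | p) and p(w) and with the amount of supporting data, which is exactly the frequency sensitivity the text attributes to this metric.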
Sigdiff, Strong: airbag brake bumper dashboard driver fender headlight hood ignition occupant pipe radiator seat shifter speedometer tailpipe trunk vent wheel windshield
Sigdiff, Loose:
+ back backseat oversteer rear roof vehicle visor
- airbag brake bumper pipe speedometer tailpipe vent
Surprise, Strong:
+ back cost engine owner price rear roof use value window
- airbag bumper fender ignition pipe radiator shifter speedometer tailpipe vent
Surprise, Loose:
+ back cost engine front owner price rear roof side value version window
- airbag brake bumper dashboard fender ignition pipe radiator shifter speedometer tailpipe vent

Table 3: Methods Comparison

Furthermore, the combination of sigdiff and strong conditioning worked better than either by itself. Thus all results in this paper, unless explicitly noted otherwise, were gathered using sigdiff and strong conditioning combined.

4 Results
4.1 Testing Humans
We tested five subjects (all of whom were unaware of our goals) for their concept of a "part." We asked them to rate sets of 100 words, of which 50 were in our final results set. Tables 6-11 show the top 50 words for each of our six seed words, along with the number of subjects who marked the word as a part of the seed concept. The scores of individual words vary greatly, but there was relative consensus on most words. We put an asterisk next to words that the majority of subjects marked as correct. Lacking a formal definition of part, we can only define those words as correct and the rest as wrong. While the scoring is admittedly not perfect(1), it provides an adequate reference result.

Table 4 summarizes these results. There we show the number of correct part words in the top 10, 20, 30, 40, and 50 parts for each seed (e.g., for "book", 8 of the top 10 are parts, and 14 of the top 20).

        book  building  car  hospital  plant  school
top 10    8      7       8      7        5      10
top 20   14     12      17     16       10      14
top 30   20     18      23     21       15      20
top 40   24     21      26     23       20      26
top 50   28     29      31     26       22      31

Table 4: Result Scores
Overall, about 55% of the top 50 words for each seed are parts, and about 70% of the top 20 for each seed. The reader should also note that we tried one ambiguous word, "plant", to see what would happen. Our program finds parts corresponding to both senses, though given the nature of our text, the industrial use is more common. Our subjects marked both kinds of parts as correct, but even so, this produced the weakest part list of the six words we tried.

As a baseline we also tried using as our "pattern" the head nouns that immediately surround our target word. We then applied the same "strong conditioning, sigdiff" statistical test to rank the candidates. This performed quite poorly. Of the top 50 candidates for each target, only 8% were parts, as opposed to the 55% for our program.

4.2 WordNet
We also compared our parts list to those of WordNet. Table 5 shows the parts of "car" in WordNet that are not in our top 20 (+) and the words in our top 20 that are not in WordNet (-).

+ door engine floorboard gear grille horn mirror roof tailfin window
- brake bumper dashboard driver headlight ignition occupant pipe radiator seat shifter speedometer tailpipe vent wheel windshield

Table 5: WordNet Comparison

There are definite tradeoffs, although we would argue that our top-20 set is both more specific and more comprehensive. Two notable words our top 20 lack are "engine" and "door", both of which occur before 100. More generally, all WordNet parts occur somewhere before 500, with the exception of "tailfin", which never occurs with car. It would seem that our program would be a good tool for expanding WordNet, as a person can scan and mark the list of part words in a few minutes.

(1) For instance, "shifter" is undeniably part of a car, while "production" is only arguably part of a plant.

5 Discussion and Conclusions
The program presented here can find parts of objects given a word denoting the whole object and a large corpus of unmarked text.
The program is about 55% accurate for the top 50 proposed parts for each of the six examples upon which we tested it. There does not seem to be a single cause for the 45% of the cases that are mistakes. We present here a few problems that have caught our attention.

Idiomatic phrases like "a jalopy of a car" or "the son of a gun" provide problems that are not easily weeded out. Depending on the data, these phrases can be as prevalent as the legitimate parts.

In some cases problems arose because of tagger mistakes. For example, "re-enactment" would be found as part of a "car" using pattern B in the phrase "the re-enactment of the car crash" if "crash" is tagged as a verb.

The program had some tendency to find qualities of objects. For example, "driveability" is strongly correlated with car. We try to weed out most of the qualities by removing words with the suffixes "ness", "ing", and "ity".

The most persistent problem is sparse data, which is the source of most of the noise. More data would almost certainly allow us to produce better lists, both because the statistics we are currently collecting would be more accurate, and because larger numbers would allow us to find other reliable indicators. For example, idiomatic phrases might be recognized as such. So we see "jalopy of a car" (two times) but not, of course, "the car's jalopy". Words that appear in only one of the two patterns are suspect, but to use this rule we need sufficient counts on the good words to be sure we have a representative sample. At 100 million words, the NANC is not exactly small, but we were able to process it in about four hours with the machines at our disposal, so still larger corpora would not be out of the question.

Finally, as noted above, Hearst [2] tried to find parts in corpora but did not achieve good results.
She does not say what procedures were used, but assuming that the work closely paralleled her work on hyponyms, we suspect that our relative success was due to our very large corpus and the use of more refined statistical measures for ranking the output.

6 Acknowledgments

This research was funded in part by NSF grant IRI-9319516 and ONR Grant N0014-96-1-0549. Thanks to the entire statistical NLP group at Brown, and particularly to Mark Johnson, Brian Roark, Gideon Mann, and Ann-Maria Popescu, who provided invaluable help on the project.

References

[1] George Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross & Katherine J. Miller, "WordNet: an on-line lexical database," International Journal of Lexicography 3 (1990), 235-245.
[2] Marti Hearst, "Automatic acquisition of hyponyms from large text corpora," in Proceedings of the Fourteenth International Conference on Computational Linguistics, 1992.
[3] Ellen Riloff & Jessica Shepherd, "A corpus-based approach for building semantic lexicons," in Proceedings of the Second Conference on Empirical Methods in Natural Language Processing, 1997, 117-124.
[4] Dekang Lin, "Automatic retrieval and clustering of similar words," in 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, 1998, 768-774.
[5] Gregory Grefenstette, "SEXTANT: extracting semantics from raw text implementation details," Heuristics: The Journal of Knowledge Engineering (1993).
[6] Brian Roark & Eugene Charniak, "Noun-phrase co-occurrence statistics for semi-automatic semantic lexicon construction," in 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, 1998, 1110-1116.
[7] Vasileios Hatzivassiloglou & Kathleen R. McKeown, "Predicting the semantic orientation of adjectives," in Proceedings of the 35th Annual Meeting of the ACL, 1997, 174-181.
[8] Stephen D. Richardson, William B. Dolan & Lucy Vanderwende, "MindNet: acquiring and structuring semantic information from text," in 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, 1998, 1098-1102.
[9] William A. Gale, Kenneth W. Church & David Yarowsky, "A method for disambiguating word senses in a large corpus," Computers and the Humanities (1992).
[10] Ted Dunning, "Accurate methods for the statistics of surprise and coincidence," Computational Linguistics 19 (1993), 61-74.

[Table 6: book (top-50 candidate parts, with occurrence (Ocr.), frame (Frame), and subject-rating (x/5) columns)]
[Table 7: building (top-50 candidate parts, with occurrence, frame, and subject-rating columns)]
[Table 8: car (top-50 candidate parts, with occurrence, frame, and subject-rating columns)]
[Table 9: hospital (top-50 candidate parts, with occurrence, frame, and subject-rating columns)]
[Table 10: plant (top-50 candidate parts, with occurrence, frame, and subject-rating columns)]
[Table 11: school (top-50 candidate parts, with occurrence, frame, and subject-rating columns)]
1999
A Pylonic Decision-Tree Language Model with Optimal Question Selection

Adrian Corduneanu
University of Toronto
73 Saint George St #299
Toronto, Ontario, M5S 2E5, Canada
[email protected]

Abstract

This paper discusses a decision-tree approach to the problem of assigning probabilities to words following a given text. In contrast with previous decision-tree language model attempts, an algorithm for selecting nearly optimal questions is considered. The model is to be tested on a standard task, the Wall Street Journal, allowing a fair comparison with the well-known trigram model.

1 Introduction

In many applications such as automatic speech recognition, machine translation, spelling correction, etc., a statistical language model (LM) is needed to assign probabilities to sentences. This probability assignment may be used, e.g., to choose one of many transcriptions hypothesized by the recognizer or to make decisions about capitalization. Without any loss of generality, we consider models that operate left-to-right on the sentences, assigning a probability to the next word given its word history. Specifically, we consider statistical LM's which compute probabilities of the type P{w_n | w_1, w_2, ..., w_{n-1}}, where w_i denotes the i-th word in the text.

Even for a small vocabulary, the space of word histories is so large that any attempt to estimate the conditional probabilities for each distinct history from raw frequencies is infeasible. To make the problem manageable, one partitions the word histories into some classes C(w_1, w_2, ..., w_{n-1}), and identifies the word probabilities with P{w_n | C(w_1, w_2, ..., w_{n-1})}. Such probabilities are easier to estimate as each class gets significantly more counts from a training corpus. With this setup, building a language model becomes a classification problem: group the word histories into a small number of classes while preserving their predictive power.
Currently, popular N-gram models classify the word histories by their last N - 1 words. N varies from 2 to 4, and the trigram model P{w_n | w_{n-2}, w_{n-1}} is commonly used. Although these simple models perform surprisingly well, there is much room for improvement. The approach used in this paper is to classify the histories by means of a decision tree: to cluster word histories w_1, w_2, ..., w_{n-1} for which the distributions of the following word w_n in a training corpus are similar. The decision tree is pylonic in the sense that histories at different nodes in the tree may be recombined in a new node to increase the complexity of questions and avoid data fragmentation.

The method has been tried before (Bahl et al., 1989) and had promising results. In the work presented here we made two major changes to the previous attempts: we have used an optimal tree growing algorithm (Chou, 1991) not known at the time of publication of (Bahl et al., 1989), and we have replaced the ad-hoc clustering of vocabulary items used by Bahl with a data-driven clustering scheme proposed in (Lucassen and Mercer, 1984).

2 Description of the Model

2.1 The Decision-Tree Classifier

The purpose of the decision-tree classifier is to cluster the word history w_1, w_2, ..., w_{n-1} into a manageable number of classes C_i, and to estimate for each class the next-word conditional distribution P{w_n | C_i}. The classifier, together with the collection of conditional probabilities, is the resultant LM. The general methodology of decision tree construction is well known (e.g., see (Jelinek, 1998)). The following issues need to be addressed for our specific application:

  • a tree growing criterion, often called the measure of purity;
  • a set of permitted questions (partitions) to be considered at each node;
  • a stopping rule, which decides the number of distinct classes.

These are discussed below.
Once the tree has been grown, we address one other issue: the estimation of the language model at each leaf of the resulting tree classifier.

2.1.1 The Tree Growing Criterion

We view the training corpus as a set of ordered pairs of the following word w_n and its word history (w_1, w_2, ..., w_{n-1}). We seek a classification of the space of all histories (not just those seen in the corpus) such that a good conditional probability P{w_n | C(w_1, w_2, ..., w_{n-1})} can be estimated for each class of histories. Since several vocabulary items may potentially follow any history, perfect "classification" or prediction of the word that follows a history is out of the question, and the classifier must partition the space of all word histories maximizing the probability P{w_n | C(w_1, w_2, ..., w_{n-1})} assigned to the pairs in the corpus.

We seek a history classification such that C(w_1, w_2, ..., w_{n-1}) is as informative as possible about the distribution of the next word. Thus, from an information theoretical point of view, a natural cost function for choosing questions is the empirical conditional entropy of the training data with respect to the tree:

  H = - Σ_i Σ_w f(w, C_i) log f(w | C_i)

Each question in the tree is chosen so as to minimize the conditional entropy, or, equivalently, to maximize the mutual information between the class of a history and the predicted word.

2.1.2 The Set of Questions and Decision Pylons

Although a tree with general questions can represent any classification of the histories, some restrictions must be made in order to make the selection of an optimal question computationally feasible. We consider elementary questions of the type w_{-k} ∈ S, where w_{-k} refers to the k-th position before the word to be predicted, and S is a subset of the vocabulary.

[Figure 1: The structure of a pylon]

However, this kind of elementary question is rather simplistic, as one node in the tree cannot refer to two different history positions.
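The tree-growing criterion just described can be computed directly from counts. The sketch below is an illustration, not the paper's code; the input format (a map from class label to a Counter of next words) is an assumption, and it uses log base 2 for readability.

```python
import math
from collections import Counter

def split_entropy(classes):
    """Empirical conditional entropy H = -sum_i sum_w f(w, C_i) log f(w | C_i)
    of a proposed split. `classes` maps each class label C_i to a Counter of
    next words observed at that node (hypothetical input format)."""
    total = sum(sum(c.values()) for c in classes.values())
    h = 0.0
    for counts in classes.values():
        n_i = sum(counts.values())
        for k in counts.values():
            # f(w, C_i) = k / total,  f(w | C_i) = k / n_i
            h -= (k / total) * math.log2(k / n_i)
    return h
```

A question would then be chosen by evaluating `split_entropy` for each candidate partition of the histories at a node and keeping the one with the smallest value.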
A conjunction of elementary questions can still be implemented over a few nodes, but similar histories become unnecessarily fragmented. Therefore a node in the tree is not implemented as a single elementary question, but as a modified decision tree in itself, called a pylon (Bahl et al., 1989). The topology of the pylon as in Figure 1 allows us to combine answers from elementary questions without increasing the number of classes. A pylon may be of any size, and it is grown as a standard decision tree.

2.1.3 Question Selection Within the Pylon

For each leaf node and position k the problem is to find the subset S of the vocabulary that minimizes the entropy of the split w_{-k} ∈ S. The best question over all k's will eventually be selected. We will use a greedy optimization algorithm developed by Chou (1991). Given a partition P = {β_1, β_2, ..., β_k} of the vocabulary, the method finds a subset S of P for which the reduction of entropy after the split is nearly optimal. The algorithm is initialized with a random partition S ∪ S̄ of P. At each iteration every atom β is examined and redistributed into a new partition S′ ∪ S̄′, according to the following rule: place β into S′ when

  Σ_w f(w | w_{-k} ∈ β) log f(w | w_{-k} ∈ S) ≥ Σ_w f(w | w_{-k} ∈ β) log f(w | w_{-k} ∈ S̄)

i.e., when the empirical distribution of the words following β's histories is better predicted by the S side of the split than by its complement, where the f's are word frequencies computed relative to the given leaf. This selection criterion ensures a decreasing empirical entropy of the tree. The iteration stops when S = S′ and S̄ = S̄′.

If questions on the same level in the pylon are constructed independently with the Chou algorithm, the overall entropy may increase. That is why nodes whose children are merged must be jointly optimized. In order to reduce complexity, questions on the same level in the pylon are asked with respect to the same position in the history.

The Chou algorithm is not accurate when the training data is sparse. For instance, when no history at the leaf has w_{-k} ∈ β, the atom is invariantly placed in S′.
Because such a choice of a question is not based on evidence, it is not expected to generalize to unseen data. As the tree is growing, data is fragmented among the leaves, and this issue becomes unavoidable. To deal with this problem, we choose the atomic partition P so that each atom gets a history count above a threshold. The choice of such an atomic partition is a complex problem, as words composing an atom must have similar predictive power. Our approach is to consider a hierarchical classification of the words, and prune it to a level at which each atom gets sufficient history counts. The word hierarchy is generated from training data with an information theoretical algorithm (Lucassen and Mercer, 1984) detailed in section 2.2.

2.1.4 The Stopping Rule

A common problem of all decision trees is the lack of a clear rule for when to stop growing new nodes. The split of a node always brings a reduction in the estimated entropy, but that might not hold for the true entropy. We use a simplified version of cross-validation (Breiman et al., 1984) to test for the significance of the reduction in entropy. If the entropy on a held-out data set is not reduced, or the reduction on the held-out text is less than 10% of the entropy reduction on the training text, the leaf is not split, because the reduction in entropy has failed to generalize to the unseen data.

2.1.5 Estimating the Language Model at Each Leaf

Once an equivalence classification of all histories is constructed, additional training data is used to estimate the conditional probabilities required for each node, as described in (Bahl et al., 1989). Smoothing as well as interpolation with a standard trigram model eliminates the zero probabilities.

2.2 The Hierarchical Classification of Words

The goal is to build a binary tree with the words of the vocabulary as leaves, such that similar words correspond to closely related leaves.
A partition of the vocabulary can be derived from such a hierarchy by taking a cut through the tree to obtain a set of subtrees. The reason for keeping a hierarchy instead of a fixed partition of the vocabulary is to be able to dynamically adjust the partition to accommodate for training data fragmentation.

The hierarchical classification of words was built with an entirely data-driven method. The motivation is that even though an expert could exhibit some strong classes by looking at parts of speech and synonyms, it is hard to produce a full hierarchy of a large vocabulary. Perhaps a combination of the expert and data-driven approaches would give the best result. Nevertheless, the algorithm that has been used in deriving the hierarchy can be initialized with classes based on parts of speech or meaning, thus taking account of prior expert information.

The approach is to construct the tree backwards. Starting with single-word classes, each iteration consists of merging the two classes most similar in predicting the word that follows them. The process continues until the entire vocabulary is in one class. The binary tree is then obtained from the sequence of merge operations. To quantify the predictive power of a partition P = {β_1, β_2, ..., β_k} of the vocabulary we look at the conditional entropy of the vocabulary with respect to the class of the previous word:

  H(w | P) = Σ_{β∈P} p(β) H(w | w_{-1} ∈ β) = - Σ_{β∈P} p(β) Σ_{w∈V} p(w | β) log p(w | β)

At each iteration we merge the two classes that minimize H(w | P′) - H(w | P), where P′ is the partition after the merge. In information-theoretical terms we seek the merge that brings the least reduction in the information provided by P about the distribution of the current word.
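The bottom-up merging procedure just described can be sketched as follows. The input format (class name to Counter of following words) and the naive O(k^2) search per merge are assumptions of this illustration, not the paper's implementation, and constant normalization factors are dropped since they cancel in the argmin.

```python
import math
from collections import Counter

def merge_hierarchy(class_dists):
    """Agglomerative word clustering: repeatedly merge the two classes whose
    union increases the conditional entropy H(w | P) the least, recording
    the merge sequence (from which the binary tree can be read off)."""
    classes = {name: Counter(d) for name, d in class_dists.items()}

    def ent(c):
        # Unnormalized contribution of one class to H(w | P):
        # -sum_w count(w) * log2 p(w | class); shared factors cancel below.
        n = sum(c.values())
        return -sum(k * math.log2(k / n) for k in c.values())

    merges = []
    while len(classes) > 1:
        best = None
        for a in classes:
            for b in classes:
                if a < b:
                    cost = ent(classes[a] + classes[b]) - ent(classes[a]) - ent(classes[b])
                    if best is None or cost < best[0]:
                        best = (cost, a, b)
        _, a, b = best
        classes[a + "+" + b] = classes.pop(a) + classes.pop(b)
        merges.append((a, b))
    return merges
```

Classes with identical next-word distributions merge at zero cost first, which is exactly the behavior that groups synonyms and same-part-of-speech words early in the hierarchy.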
608 IRAN'S UNION'S IRAQ'S INVESTORS' BANKS' PEOPLE'S FARMER TEACHER WORKER DRIVER WRITER SPECIALIST EXPERT TRADER PLUMMETED PLUNGED SOARED TUMBLED SURGED RALLIED FALLING FALLS RISEN FALLEN MYSELF HIMSELF OURSELVES THEMSELVES CONSIDERABLY SIGNIFICANTLY SUBSTANTIALLY SOMEWHAT SLIGHTLY Figure 2: Sample classes from a 1000-element partition of a 5000-word vocabulary (each col- umn is a different class) The algorithm produced satisfactory results on a 5000-word vocabulary. One can see from the sample classes that the automatic building of the hierarchy accounts both for similarity in meaning and of parts of speech. the vocabulary is significantly larger, making impossible the estimation of N-gram models for N > 3. However, we expect that due to the good smoothing of the trigram probabilities a combination of the decision-tree and N-gram models will give the best results. 4 Summary In this paper we have developed a decision-tree method for building a language model that pre- dicts words given their previous history. We have described a powerful question search algo- rithm, that guarantees the local optimality of the selection, and which has not been applied before to word language models. We expect that the model will perform significantly better than the standard N-gram approach. 5 Acknowledgments I would like to thank Prof.Frederick Jelinek and Sanjeev Khu- dampur from Center for Language and Speech Processing, Johns Hopkins University, for their help related to this work and for providing the computer resources. I also wish to thank Prof.Graeme Hirst from University of Toronto for his useful advice in all the stages of this project. 3 Evaluation of the Model The decision tree is being trained and tested on the Wall Street Journal corpus from 1987 to 1989 containing 45 million words. The data is divided into 15 million words for growing the nodes, 15 million for cross-validation, 10 mil- lion for estimating probabilities, and 5 million for testing. 
To compare the results with other similar attempts (Bahl et al., 1989), the vocab- ulary consists of only the 5000 most frequent words and a special "unknown" word that re- places all the others. The model tries to predict the word following a 20-word history. At the time this paper was written, the im- plementation of the presented algorithms was nearly complete and preliminary results on the performance of the decision tree were expected soon. The evaluation criterion to be used is the perplexity of the test data with respect to the tree. A comparison with the perplexity of a standard back-off trigram model will in- dicate which model performs better. Although decision-tree letter language models are inferior to their N-gram counterparts (Potamianos and Jelinek, 1998), the situation should be reversed for word language models. In the case of words References L. R. Bahl, P. F. Brown, P. V. de Souza, and R. L. Mercer. 1989. A tree-based statistical language model for natural language speech recognition. IEEE Transactions on Acous- tics, Speech, and Signal Processing, 37:1001- 1008. L. Breiman, J. Friedman, R. Olshen, and C. Stone. 1984. Classification and regression trees. Wadsworth and Brooks, Pacific Grove. P. A. Chou. 1991. Optimal partitioning for classification and regression trees. IEEE Transactions on Pattern Analysis and Ma- chine Intelligence, 13:340-354. F. Jelinek. 1998. Statistical methods ]or speech recognition. The MIT Press, Cambridge. J. M. Lucassen and R. L. Mercer. 1984. An information theoretic approach to the auto- matic determination of phonemic baseforms. In Proceedings of the 1984 International Con- -ference on Acoustics, Speech, and Signal Pro- cessing, volume III, pages 42.5.1-42.5.4. G. Potamianos and F. Jelinek. 1998. A study of n-gram and decision tree letter language modeling methods. Speech Communication, 24:171-192. 609
An Unsupervised Model for Statistically Determining Coordinate Phrase Attachment

Miriam Goldberg
Central High School & Dept. of Computer and Information Science
University of Pennsylvania
200 South 33rd Street, Philadelphia, PA 19104-6389
miriamg@unagi.cis.upenn.edu

Abstract

This paper examines the use of an unsupervised statistical model for determining the attachment of ambiguous coordinate phrases (CP) of the form n1 p n2 cc n3. The model presented here is based on [AR98], an unsupervised model for determining prepositional phrase attachment. After training on unannotated 1988 Wall Street Journal text, the model performs at 72% accuracy on a development set from sections 14 through 19 of the WSJ TreeBank [MSM93].

1 Introduction

The coordinate phrase (CP) is a source of structural ambiguity in natural language. For example, take the phrase:

  box of chocolates and roses

'Roses' attaches either high to 'box' or low to 'chocolates'. In this case, attachment is high, yielding:

  H-attach: ((box (of chocolates)) (and roses))

Consider, then, the phrase:

  salad of lettuce and tomatoes

'Lettuce' attaches low to 'tomatoes', giving:

  L-attach: (salad (of ((lettuce) and (tomatoes))))

Previous work has used corpus-based approaches to solve the similar problem of prepositional phrase attachment. These have included backed-off [CB95], maximum entropy [RRR94], rule-based [HR94], and unsupervised [AR98] models. In addition to these, a corpus-based model for PP-attachment [SN97] has been reported that uses information from a semantic dictionary.

Sparse data can be a major concern in corpus-based disambiguation. Supervised models are limited by the amount of annotated data available for training. Such a model is useful only for languages in which annotated corpora are available. Because an unsupervised model does not rely on such corpora, it may be modified for use in multiple languages, as in [AR98].

The unsupervised model presented here trains from an unannotated version of the 1988 Wall Street Journal. After tagging and chunking the text, a rough heuristic is then employed to pick out training examples. This results in a training set that is less accurate, but much larger, than currently existing annotated corpora. It is the goal, then, of unsupervised training data to be abundant in order to offset its noisiness.

2 Background

The statistical model must determine the probability of a given CP attaching either high (H) or low (L): p(attachment | phrase). Results shown come from a development corpus of 500 phrases of extracted head word tuples from the WSJ TreeBank [MSM93]. 64% of these phrases attach low and 36% attach high. After further development, final testing will be done on a separate corpus.

The phrase:

  (busloads (of ((executives) and (their wives))))

gives the 6-tuple:

  L busloads of executives and wives

where a = L, n1 = busloads, p = of, n2 = executives, cc = and, n3 = wives. The CP attachment model must determine a for all (n1 p n2 cc n3) sets. The attachment decision is correct if it is the same as the corresponding decision in the TreeBank set. The probability of a CP attaching high is conditional on the 5-tuple. The algorithm presented in this paper estimates the probability:

  p̂ = p(a | n1, p, n2, cc, n3)

The parts of the CP are analogous to those of the prepositional phrase (PP) such that {n1, n2} ~ {n, v} and n3 ~ p. [AR98] determines the probability p(v, n, p, a). To be consistent, here we determine the probability p(n1, n2, n3, a).

3 Training Data Extraction

A statistical learning model must train from unambiguous data. In annotated corpora, ambiguous data are made unambiguous through classifications made by human annotators. In unannotated corpora the data themselves must be unambiguous. Therefore, while this model disambiguates CPs of the form (n1 p n2 cc n3), it trains from implicitly unambiguous CPs of the form (n cc n). For example:

  dog and cat

Because there are only two nouns in the unambiguous CP, we must redefine its components. The first noun will be referred to as n1. It is analogous to n1 and n2 in the ambiguous CP. The second, terminal noun will be referred to as n3. It is analogous to the third noun in the ambiguous CP. Hence n1 = dog, cc = and, n3 = cat. In addition to the unambiguous CPs, the model also uses any noun that follows a cc. Such nouns are classified ncc.

We extracted 119629 unambiguous CPs and 325261 nccs from the unannotated 1988 Wall Street Journal. First the raw text was fed into the part-of-speech tagger described in [AR96].(1) This was then passed to a simple chunker as used in [AR98], implemented with two small regular expressions that replace noun and quantifier phrases with their head words. These head words were then passed through a set of heuristics to extract the unambiguous phrases. The heuristics to find an unambiguous CP are:

  • w_n is a coordinating conjunction (cc) if it is tagged cc.
  • w_{n-x} is the leftmost noun (n1) if:
    - it is the first noun to occur within 4 words to the left of cc;
    - no preposition occurs between this noun and cc;
    - no preposition occurs within 4 words to the left of this noun.
  • w_{n+x} is the rightmost noun (n3) if:
    - it is the first noun to occur within 4 words to the right of cc;
    - no preposition occurs between cc and this noun.

The first noun to occur within 4 words to the right of cc is always extracted. This is ncc. Such nouns are also used in the statistical model.

For example, we process the sentence below as follows:

  Several firms have also launched business subsidiaries and consulting arms specializing in trade, lobbying and other areas.

(1) Because this tagger trained on annotated data, one may argue that the model presented here is not purely unsupervised.
First it is annotated with parts of speech:

  Several_JJ firms_NNS have_VBP also_RB launched_VBN business_NN subsidiaries_NNS and_CC consulting_VBG arms_NNS specializing_VBG in_IN trade_NN ,_, lobbying_NN and_CC other_JJ areas_NNS ._.

From there, it is passed to the chunker, yielding:

  firms_NNS have_VBP also_RB launched_VBN subsidiaries_NNS and_CC consulting_VBG arms_NNS specializing_VBG in_IN trade_NN ,_, lobbying_NN and_CC areas_NNS ._.

Noun phrase heads of ambiguous and unambiguous CPs are then extracted according to the heuristic, giving:

  subsidiaries and arms
  and areas

where the extracted unambiguous CP is {n1 = subsidiaries, cc = and, n3 = arms}, and areas is extracted as an ncc because, although it is not part of an unambiguous CP, it occurs within four words after a conjunction.

4 The Statistical Model

First, we can factor p(a, n1, n2, n3) as follows:

  p(a, n1, n2, n3) = p(n1) p(n2) p(a | n1, n2) p(n3 | a, n1, n2)

The terms p(n1) and p(n2) are independent of the attachment and need not be computed. The other two terms are more problematic. Because the training phrases are unambiguous and of the form (n1 cc n3), n1 and n2 of the CP in question never appear together in the training data. To compensate we use the following heuristic, as in [AR98]. Let the random variable φ range over {true, false} and let it denote the presence or absence of any n3 that unambiguously attaches to the n1 or n2 in question. If φ = true when any n3 unambiguously attaches to n1, then p(φ = true | n1) is the conditional probability that a particular n1 occurs with an unambiguously attached n3. Now p(a | n1, n2) can be approximated as:

  p(a = H | n1, n2) ≈ p(true | n1) / Z(n1, n2)
  p(a = L | n1, n2) ≈ p(true | n2) / Z(n1, n2)

where the normalization factor Z(n1, n2) = p(true | n1) + p(true | n2).
The reasoning behind this approximation is that the tendency of a CP to attach high (low) is related to the tendency of the n1 (n2) in question to appear in an unambiguous CP in the training data.

We approximate p(n3 | a, n1, n2) as follows:

p(n3 | a = H, n1, n2) ≈ p(n3 | true, n1)
p(n3 | a = L, n1, n2) ≈ p(n3 | true, n2)

The reasoning behind this approximation is that when generating n3 given high (low) attachment, the only counts from the training data that matter are those which unambiguously attach to n1 (n2), i.e., φ = true. Word statistics from the extracted CPs are used to formulate these probabilities.

4.1 Generate φ

The conditional probabilities p(true | n1) and p(true | n2) denote the probability of whether a noun will appear attached unambiguously to some n3. These probabilities are estimated as:

p(true | n1) = f(n1, true) / f(n1)   if f(n1, true) > 0;   0.5 otherwise
p(true | n2) = f(n2, true) / f(n2)   if f(n2, true) > 0;   0.5 otherwise

where f(n2, true) is the number of times n2 appears in an unambiguously attached CP in the training data and f(n2) is the number of times this noun has appeared as either n1, n3, or n_cc in the training data.

4.2 Generate n3

The terms p(n3 | n1, true) and p(n3 | n2, true) denote the probabilities that the noun n3 appears attached unambiguously to n1 and n2 respectively. Bigram counts are used to compute these as follows:

p(n3 | true, n1) = f(n1, n3, true) / f(n1, true)   if f(n1, n3, true) > 0;   1/|N| otherwise
p(n3 | true, n2) = f(n2, n3, true) / f(n2, true)   if f(n2, n3, true) > 0;   1/|N| otherwise

where N is the set of all n3s and n_cc s that occur in the training data.

5 Results

Decisions were deemed correct if they agreed with the decision in the corresponding TreeBank data. The correct attachment was chosen 72% of the time on the 500-phrase development corpus from the WSJ TreeBank. Because it is a forced binary decision, there are no measurements for recall or precision. If low attachment is always chosen, the accuracy is 64%.
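The decision procedure of Section 4 can be sketched as follows. This is a sketch only: the counts, the example nouns, and |N| are illustrative stand-ins, not values from the paper's training data.

```python
from collections import Counter

# Illustrative counts from hypothetical training data:
# f_true[n]       = times n occurred in an unambiguous CP (phi = true)
# f_all[n]        = times n occurred as n1, n3, or n_cc
# f_pair[(n, n3)] = times n3 unambiguously attached to n
f_true = Counter({"busloads": 4, "children": 1})
f_all = Counter({"busloads": 5, "children": 9})
f_pair = Counter({("busloads", "parents"): 3})
N_SIZE = 1000  # |N|: all n3s and n_cc s seen in training (illustrative)

def p_true(n):
    # p(true | n), backing off to 0.5 for nouns never seen attached (Sec. 4.1)
    return f_true[n] / f_all[n] if f_true[n] > 0 else 0.5

def p_n3_given(n3, n):
    # p(n3 | true, n), backing off to 1/|N| for unseen bigrams (Sec. 4.2)
    return f_pair[(n, n3)] / f_true[n] if f_pair[(n, n3)] > 0 else 1.0 / N_SIZE

def attach(n1, n2, n3):
    # Compare p(a = H | n1, n2) p(n3 | H, ...) against the low analogue.
    z = p_true(n1) + p_true(n2)
    p_high = (p_true(n1) / z) * p_n3_given(n3, n1)
    p_low = (p_true(n2) / z) * p_n3_given(n3, n2)
    return "H" if p_high >= p_low else "L"

print(attach("busloads", "children", "parents"))  # H on these counts
```

Since p(n1) and p(n2) cancel out of the comparison, only the two conditional terms need to be evaluated for each attachment hypothesis.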
After further development the model will be tested on a testing corpus.

When evaluating the effectiveness of an unsupervised model, it is helpful to compare its performance to that of an analogous supervised model. The smaller the error reduction when going from unsupervised to supervised models, the more comparable the unsupervised model is to its supervised counterpart. To our knowledge there has been very little if any work in the area of ambiguous CPs. In addition to developing an unsupervised CP disambiguation model, in [MG, in prep] we have developed two supervised models (one backed-off and one maximum entropy) for determining CP attachment. The backed-off model, closely based on [CB95], performs at 75.6% accuracy. The error reduction from the unsupervised model presented here to the backed-off model is 13%. This is comparable to the 14.3% error reduction found when going from [AR98] to [CB95].

It is interesting to note that after reducing the volume of training data by half there was no drop in accuracy. In fact, accuracy remained exactly the same as the volume of data was increased from half to full. The backed-off model in [MG, in prep] trained on only 1,380 training phrases. The training corpus used in the study presented here consisted of 119,629 training phrases. Reducing this figure by half is not overly significant.

6 Discussion

In an effort to make the heuristic concise and portable, we may have oversimplified it, thereby negatively affecting the performance of the model. For example, when the heuristic came upon a noun phrase consisting of more than one consecutive noun, the noun closest to the cc was extracted. In a phrase like coffee and rhubarb apple pie the heuristic would choose rhubarb as the n3 when clearly pie should have been chosen. Also, the heuristic did not check if a preposition occurred between either n1 and cc or cc and n3.
Such cases make the CP ambiguous, thereby invalidating it as an unambiguous training example.

By including annotated training data from the TreeBank set, this model could be modified to become a partially-unsupervised classifier.

Because the model presented here is basically a straight reimplementation of [AR98], it fails to take into account attributes that are specific to the CP. For example, whereas (n1 cc n3) = (n3 cc n1), (v p n) ≠ (n p v). In other words, there is no reason to make the distinction between "dog and cat" and "cat and dog." Modifying the model accordingly may greatly increase the usefulness of the training data.

7 Acknowledgements

We thank Mitch Marcus and Dennis Erlick for making this research possible, Mike Collins for his guidance, and Adwait Ratnaparkhi and Jason Eisner for their helpful insights.

References

[CB95] M. Collins, J. Brooks. 1995. Prepositional Phrase Attachment through a Backed-Off Model, ACL 3rd Workshop on Very Large Corpora, pages 27-38, Cambridge, Massachusetts, June.

[MG, in prep] M. Goldberg. In preparation. Three Models for Statistically Determining Coordinate Phrase Attachment.

[HR93] D. Hindle, M. Rooth. 1993. Structural Ambiguity and Lexical Relations. Computational Linguistics, 19(1):103-120.

[MSM93] M. Marcus, B. Santorini and M. Marcinkiewicz. 1993. Building a Large Annotated Corpus of English: the Penn Treebank, Computational Linguistics, 19(2):313-330.

[RRR94] A. Ratnaparkhi, J. Reynar and S. Roukos. 1994. A Maximum Entropy Model for Prepositional Phrase Attachment, In Proceedings of the ARPA Workshop on Human Language Technology, 1994.

[AR96] A. Ratnaparkhi. 1996. A Maximum Entropy Part-Of-Speech Tagger, In Proceedings of the Empirical Methods in Natural Language Processing Conference, May 17-18.

[AR98] A. Ratnaparkhi. 1998. Unsupervised Statistical Models for Prepositional Phrase Attachment, In Proceedings of the Seventeenth International Conference on Computational Linguistics, Aug.
10-14, Montreal, Canada.

[SN97] J. Stetina, M. Nagao. 1997. Corpus Based PP Attachment Ambiguity Resolution with a Semantic Dictionary. In Jou Shou and Kenneth Church, editors, Proceedings of the Fifth Workshop on Very Large Corpora, pages 66-80, Beijing and Hong Kong, Aug. 18-20.
1999
A flexible distributed architecture for NLP system development and use

Freddy Y. Y. Choi
Artificial Intelligence Group
University of Manchester
Manchester, U.K.
[email protected]

Abstract

We describe a distributed, modular architecture for platform independent natural language systems. It features automatic interface generation and self-organization. Adaptive (and non-adaptive) voting mechanisms are used for integrating discrete modules. The architecture is suitable for rapid prototyping and product delivery.

1 Introduction

This article describes TEA¹, a flexible architecture for developing and delivering platform independent text engineering (TE) systems. TEA provides a generalized framework for organizing and applying reusable TE components (e.g. tokenizer, stemmer). Thus, developers are able to focus on problem solving rather than implementation. For product delivery, the end user receives an exact copy of the developer's edition. The visibility of configurable options (different levels of detail) is adjustable along a simple gradient via the automatically generated user interface (Edwards, Forthcoming).

Our target application is telegraphic text compression (Choi (1999b); Roelofs (Forthcoming); Grefenstette (1998)). We aim to improve the efficiency of screen readers for the visually disabled by removing uninformative words (e.g. determiners) in text documents. This produces a stream of topic cues for rapid skimming. The information value of each word is to be estimated based on an unusually wide range of linguistic information.

TEA was designed to be a development environment for this work. However, the target application has led us to produce an interesting architecture and techniques that are more generally applicable, and it is these which we will focus on in this paper.

¹ TEA is an acronym for Text Engineering Architecture.
2 Architecture

[Figure 1: An overview of the TEA system framework, showing system input and output, plug-ins, shared knowledge, and the system control structure.]

The central component of TEA is a frame-based data model (F) (see Fig. 2). In this model, a document is a list of frames (Rich and Knight, 1991) for recording the properties of each token in the text (example in Fig. 2). A typical TE system converts a document into F with an input plug-in. The information required at the output determines the set of process plug-ins to activate. These use the information in F to add annotations to F. Their dependencies are automatically resolved by TEA. System behavior is controlled by adjusting the configurable parameters.

Frame 1: (:token An :pos art :begin_s 1)
Frame 2: (:token example :pos n)
Frame 3: (:token sentence :pos n)
Frame 4: (:token . :pos punc :end_s 1)

Figure 2: "An example sentence." in a frame-based data model

This type of architecture has been implemented, classically, as a 'blackboard' system such as Hearsay-II (Erman, 1980), where inter-module communication takes place through a shared knowledge structure; or as a 'message-passing' system where the modules communicate directly. Our architecture is similar to blackboard systems. However, the purpose of F (the shared knowledge structure in TEA) is to provide a single extendable data structure for annotating text. It also defines a standard interface for inter-module communication, thus improving system integration and ease of software reuse.

2.1 Voting mechanism

A feature that distinguishes TEA from similar systems is its use of voting mechanisms for system integration. Our approach has two distinct but uniformly treated applications. First, for any type of language analysis, different techniques t_i will return successful results P(r) on different subsets of the problem space. Thus combining the outputs P(r | t_i) from several t_i should give a result more accurate than any one in isolation.
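One way to realize such a combination is a weighted vote over the techniques' outputs. The sketch below is illustrative only: the weights and per-technique probabilities are invented for the example.

```python
# Sketch of combining the outputs P(r | t_i) of several techniques by a
# weighted vote; weights and probabilities are illustrative.

def weighted_average(outputs, weights):
    # P(r) = sum_i w_i * P(r | t_i)
    return sum(w * p for w, p in zip(weights, outputs))

def weighted_maximum(outputs, weights):
    # P(r) = max_i w_i * P(r | t_i)
    return max(w * p for w, p in zip(weights, outputs))

# Three hypothetical techniques scoring the same candidate result r:
outputs = [0.9, 0.6, 0.2]
weights = [0.5, 0.3, 0.2]
print(round(weighted_average(outputs, weights), 2))  # 0.67
print(round(weighted_maximum(outputs, weights), 2))  # 0.45
```

In an adaptive setting the weights would themselves be estimated from data rather than fixed by hand.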
This has been demonstrated in several systems (e.g. Choi (1999a); van Halteren et al. (1998); Brill and Wu (1998); Veronis and Ide (1991)). Our architecture currently offers two types of voting mechanisms: weighted average (Eq. 1) and weighted maximum (Eq. 2). A Bayesian classifier (Weiss and Kulikowski, 1991) based weight estimation algorithm (Eq. 3) is included for constructing adaptive voting mechanisms.

P(r) = Σ_{i=1..n} w_i P(r | t_i)    (1)

P(r) = max{w_1 P(r | t_1), ..., w_n P(r | t_n)}    (2)

w_i = P(r | t_i)    (3)

Second, different types of analysis a_i will provide different information about a problem; hence, a solution is improved by combining several a_i. For telegraphic text compression, we estimate E(w), the information value of a word, based on a wide range of different information sources (Fig. 3 shows a subset of our working system). The outputs of the a_i are combined by a voting mechanism to form a single measure.

[Figure 3: An example configuration of TEA for telegraphic text compression, showing voting mechanisms and processes arranged for technique combination and analysis combination.]

Thus, for example, if our system encounters the phrase 'President Clinton', both lexical lookup and automatic tagging will agree that 'President' is a noun. Nouns are generally informative, so should be retained in the compressed output text. However, grammar-based syntactic analysis gives a lower weighting to the first noun of a noun-noun construction, and bigram analysis tells us that 'President Clinton' is a common word pair. These two modules overrule the simple POS value, and 'President Clinton' is reduced to 'Clinton'.

3 Related work

Current trends in the development of reusable TE tools are best represented by the Edinburgh tools (LTGT)² (LTG, 1999) and GATE³ (Cunningham et al., 1995). Like TEA, both LTGT and GATE are frameworks for TE.

LTGT adopts the pipeline architecture for module integration. For processing, a text document is converted into SGML format.
Processing modules are then applied to the SGML file sequentially. Annotations are accumulated as mark-up tags in the text. The architecture is simple to understand, robust and future proof. The SGML/XML standard is well developed and supported by the community. This improves the reusability of the tools. However, the architecture encourages tool development rather than reuse of existing TE components.

² LTGT is an acronym for the Edinburgh Language Technology Group Tools.
³ GATE is an acronym for General Architecture for Text Engineering.

GATE is based on an object-oriented data model (similar to the TIPSTER architecture (Grishman, 1997)). Modules communicate by reading and writing information to and from a central database. Unlike LTGT, both GATE and TEA are designed to encourage software reuse. Existing TE tools are easily incorporated with Tcl wrapper scripts and Java interfaces, respectively.

Features that distinguish LTGT, GATE and TEA are the configuration methods, portability and motivation. Users of LTGT write shell scripts to define a system (as a chain of LTGT components). With GATE, a system is constructed manually by wiring TE components together using the graphical interface. TEA assumes the user knows nothing but the available input and required output. The appropriate set of plug-ins are automatically activated. Module selection can be manually configured by adjusting the parameters of the voting mechanisms. This ensures a TE system is accessible to complete novices and yet has sufficient control for developers.

LTGT and GATE are both open-source C applications. They can be recompiled for many platforms. TEA is a Java application. It can run directly (without compilation) on any Java supported system. However, applications constructed with the current release of GATE and TEA are less portable than those produced with LTGT. GATE and TEA encourage reuse of existing components, not all of which are platform independent.⁴
⁴ This is not a problem for LTGT since the architecture does not encourage component reuse.

We believe this is a worthwhile trade-off since it allows developers to construct prototypes with components that are only available as separate applications. Native tools can be developed incrementally.

4 An example

Our application is telegraphic text compression. The examples were generated with a subset of our working system, using a section of the book HAL's Legacy (Stork, 1997) as test data. First, we used different compression techniques to generate the examples in Fig. 4. This was done by simply adjusting a parameter of an output plug-in. It is clear that the output is inadequate for rapid text skimming. To improve the system, the three measures were combined with an unweighted voting mechanism. Fig. 5 presents two levels of compression using the new measure.

1. With science fiction films the more science you understand the less you admire the film or respect its makers
2. fiction films understand less admire respect makers
3. fiction understand less admire respect makers
4. science fiction films science film makers

Figure 4: Three measures of information value: (1) Original sentence, (2) Token frequency, (3) Stem frequency and (4) POS.

1. science fiction films understand less admire film respect makers
2. fiction makers

Figure 5: Improving telegraphic text compression by analysis combination.

5 Conclusions and future directions

We have described an interesting architecture (TEA) for developing platform independent text engineering applications. Product delivery, configuration and development are made simple by the self-organizing architecture and variable interface. The use of voting mechanisms for integrating discrete modules is original. Its motivation is well supported.

The current implementation of TEA is geared towards token analysis. We plan to extend the data model to cater for structural annotations.
The tool set for TEA is constantly being extended; recent additions include a prototype symbolic classifier, a shallow parser (Choi, Forthcoming), a sentence segmentation algorithm (Reynar and Ratnaparkhi, 1997) and a POS tagger (Ratnaparkhi, 1996). Other adaptive voting mechanisms are to be investigated. Future releases of TEA will support concurrent execution (distributed processing) over a network. Finally, we plan to investigate means of improving system integration and module organization, e.g. annotation, module and tag set compatibility.

References

E. Brill and J. Wu. 1998. Classifier combination for improved lexical disambiguation. In Proceedings of COLING-ACL'98, pages 191-195, Montreal, Canada, August.

F. Choi. 1999a. An adaptive voting mechanism for improving the reliability of natural language processing systems. Paper submitted to EACL'99, January.

F. Choi. 1999b. Speed reading for the visually disabled. Paper submitted to SIGART/AAAI'99 Doctoral Consortium, February.

F. Choi. Forthcoming. A probabilistic approach to learning shallow linguistic patterns. In Proceedings of ECAI'99 (Student Session), Greece.

H. Cunningham, R.G. Gaizauskas, and Y. Wilks. 1995. A general architecture for text engineering (GATE) - a new approach to language engineering research and development. Technical Report CD-95-21, Department of Computer Science, University of Sheffield. http://xxx.lanl.gov/ps/cmp-lg/9601009.

M. Edwards. Forthcoming. An approach to automatic interface generation. Final year project report, Department of Computer Science, University of Manchester, Manchester, England.

L. Erman. 1980. The Hearsay-II speech understanding system: Integrating knowledge to resolve uncertainty. In ACM Computer Surveys, volume 12.

G. Grefenstette. 1998. Producing intelligent telegraphic text reduction to provide an audio scanning service for the blind. In AAAI'98 Workshop on Intelligent Text Summarization, San Francisco, March.

R. Grishman.
1997. TIPSTER architecture design document version 2.3. Technical report, DARPA. http://www.tipster.org.

LTG. 1999. Edinburgh University, HCRC, LTG software. WWW. http://www.ltg.ed.ac.uk/software/index.html.

H. Rollfs of Roelofs. Forthcoming. Telegraphese: Converting text into telegram style. Master's thesis, Department of Computer Science, University of Manchester, Manchester, England.

G. M. P. O'Hare and N. R. Jennings, editors. 1996. Foundations of Distributed Artificial Intelligence. Sixth generation computer series. Wiley Interscience Publishers, New York. ISBN 0-471-00675.

A. Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proceedings of the Empirical Methods in NLP Conference, University of Pennsylvania.

J. Reynar and A. Ratnaparkhi. 1997. A maximum entropy approach to identifying sentence boundaries. In Proceedings of the Fifth Conference on Applied NLP, Washington D.C.

E. Rich and K. Knight. 1991. Artificial Intelligence. McGraw-Hill, Inc., second edition. ISBN 0-07-100894-2.

D. Stork, editor. 1997. Hal's Legacy: 2001's Computer in Dream and Reality. MIT Press. http://mitpress.mit.edu/e-books/Hal/.

H. van Halteren, J. Zavrel, and W. Daelemans. 1998. Improving data driven wordclass tagging by system combination. In Proceedings of COLING-ACL'98, volume 1.

J. Veronis and N. Ide. 1991. An assessment of semantic information automatically extracted from machine readable dictionaries. In Proceedings of EACL'91, pages 227-232, Berlin.

S. Weiss and C. Kulikowski. 1991. Computer Systems That Learn. Morgan Kaufmann.
Modeling Filled Pauses in Medical Dictations

Sergey V. Pakhomov
University of Minnesota
190 Klaeber Court
320-16th Ave. S.E.
Minneapolis, MN 55455
[email protected]

Abstract

Filled pauses are characteristic of spontaneous speech and can present considerable problems for speech recognition by often being recognized as short words. An um can be recognized as thumb or arm if the recognizer's language model does not adequately represent FP's. Recognition of quasi-spontaneous speech (medical dictation) is subject to this problem as well. Results from medical dictations by 21 family practice physicians show that using an FP model trained on a corpus populated with FP's produces overall better results than a model trained on a corpus that excluded FP's or a corpus that had random FP's.

Introduction

Filled pauses (FP's), false starts, repetitions, fragments, etc. are characteristic of spontaneous speech and can present considerable problems for speech recognition. FP's are often recognized as short words of similar phonetic quality. For example, an um can be recognized as thumb or arm if the recognizer's language model does not adequately represent FP's. Recognition of quasi-spontaneous speech (medical dictation) is subject to this problem as well. The FP problem becomes especially pertinent where the corpora used to build language models are compiled from text with no FP's.

Shriberg (1996) has shown that representing FP's in a language model helps decrease the model's perplexity. She finds that when a FP occurs at a major phrase or discourse boundary, the FP itself is the best predictor of the following lexical material; conversely, in a non-boundary context, FP's are predictable from the preceding words. Shriberg (1994) shows that the rate of disfluencies grows exponentially with the length of the sentence, and that FP's occur more often in the initial position (see also Swerts (1996)).
This paper presents a method of using bigram probabilities for extracting FP distributions from a corpus of hand-transcribed data. The resulting bigram model is used to populate another training corpus that originally had no FP's. Results from medical dictations by 21 family practice physicians show that using an FP model trained on the corpus populated with FP's produces overall better results than a model trained on a corpus that excluded FP's or a corpus that had random FP's. Recognition accuracy improves proportionately to the frequency of FP's in the speech.

1. Filled Pauses

FP's are not random events, but have a systematic distribution and well-defined functions in discourse (Shriberg and Stolcke 1996, Shriberg 1994, Swerts 1996, Maclay and Osgood 1959, Cook 1970, Cook and Lalljee 1970, Christenfeld et al. 1991). Cook and Lalljee (1970) make an interesting proposal that FP's may have something to do with the listener's perception of disfluent speech. They suggest that speech may be more comprehensible when it contains filler material during hesitations by preserving continuity, and that a FP may serve as a signal to draw the listener's attention to the next utterance, in order for the listener not to lose the onset of the following utterance. Perhaps, from the point of view of perception, FP's are not disfluent events at all. This proposal bears directly on the domain of medical dictations, since many doctors who use old voice-operated equipment train themselves to use FP's instead of silent pauses, so that the recorder wouldn't cut off the beginning of the post-pause utterance.

2. Quasi-spontaneous speech

Family practice medical dictations tend to be pre-planned and follow an established SOAP format: (Subjective (informal observations), Objective (examination), Assessment (diagnosis) and Plan (treatment plan)).
Despite that, doctors vary greatly in how frequently they use FP's, which agrees with Cook and Lalljee's (1970) findings of no correlation between FP use and the mode of discourse. Audience awareness may also play a role in variability. My observations provide multiple examples where the doctors address the transcriptionists directly by making editing comments and thanking them.

3. Training Corpora and FP Model

This study used three base and two derived corpora. Base corpora represent three different sets of dictations described in Section 3.1. Derived corpora are variations on the base corpora conditioned in several different ways described in Section 3.2.

3.1 Base

• Balanced FP training corpus (BFP-CORPUS) that has 75,887 words of word-by-word transcription data evenly distributed between 16 talkers. This corpus was used to build a BIGRAM-FP-LM which controls the process of populating a no-FP corpus with artificial FP's.

• Unbalanced FP training corpus (UFP-CORPUS) of approximately 500,000 words of all available word-by-word transcription data from approximately 20 talkers. This corpus was used only to calculate the average frequency of FP use among all available talkers.

• Finished transcriptions corpus (FT-CORPUS) of 12,978,707 words, containing all available dictations and no FP's. It represents over 200 talkers of mixed gender and professional status. The corpus contains no FP's or any other types of disfluencies such as repetitions, repairs and false starts. The language in this corpus is also edited for grammar.

3.2 Derived

• CONTROLLED-FP-CORPUS is a version of the finished transcriptions corpus populated stochastically with 2,665,000 FP's based on the BIGRAM-FP-LM.

• RANDOM-FP-CORPUS-1 (normal density) is another version of the finished transcriptions corpus populated with 916,114 FP's, where the insertion point was selected at random in the range between 0 and 29.
The random function is based on the average frequency of FP's in the unbalanced UFP-CORPUS, where an FP occurs on average after every 15th word. Another RANDOM-FP-CORPUS-2 (high density) was used to approximate the frequency of FP's in the CONTROLLED-FP-CORPUS.

4. Models

The language modeling process in this study was conducted in two stages. First, a bigram model containing bigram probabilities of FP's in the balanced BFP-CORPUS was built, followed by four different trigram language models, some of which used corpora generated with the BIGRAM-FP-LM built during the first stage.

4.1 Bigram FP model

This model contains the distribution of FP's obtained by using the following formulas:

P(FP | w_{i-1}) = C(w_{i-1} FP) / C(w_{i-1})
P(FP | w_{i+1}) = C(FP w_{i+1}) / C(w_{i+1})

Thus, each word in a corpus to be populated with FP's becomes a potential landing site for a FP and does or does not receive one based on the probability found in the BIGRAM-FP-LM.

4.2 Trigram models

The following trigram models were built using ECRL's Transcriber language modeling tools (Valtchev et al. 1998). Both bigram and trigram cutoffs were set to 3.

• NOFP-LM was built using the FT-CORPUS with no FP's.
• ALLFP-LM was built entirely on CONTROLLED-FP-CORPUS.
• ADAPTFP-LM was built by interpolating ALLFP-LM and NOFP-LM at a 90/10 ratio. Here 90% of the resulting ADAPTFP-LM represents the CONTROLLED-FP-CORPUS and 10% represents FT-CORPUS.
• RANDOMFP-LM-1 (normal density) was built entirely on the RANDOM-FP-CORPUS-1.
• RANDOMFP-LM-2 (high density) was built entirely on the RANDOM-FP-CORPUS-2.

5. Testing Data

Testing data comes from 21 talkers selected at random and represents 3 (1-3 min) dictations for each talker. The talkers are a random mix of male and female medical doctors and practitioners who vary greatly in their use of FP's. Some use literally no FP's (but long silences instead); others use FP's almost every other word.
Based on the frequency of FP use, the talkers were roughly split into a high FP user and a low FP user group. The relevance of this division will become apparent during the discussion of test results.

6. Adaptation

Test results for ALLFP-LM (63.01% avg. word accuracy) suggest that the model over-represents FP's. The recognition accuracy for this model is 4.21 points higher than that of the NOFP-LM (58.8% avg. word accuracy) but lower than that of both the RANDOMFP-LM-1 (67.99% avg. word accuracy) by about 5% and the RANDOMFP-LM-2 (65.87% avg. word accuracy) by about 3%. One way of decreasing the FP representation is to correct the BIGRAM-FP-LM, which proves to be computationally expensive because of having to rebuild the large training corpus with each change in the BIGRAM-FP-LM. Another method is to build a NOFP-LM and an ALLFP-LM once and experiment with their relative weights through adaptation. I chose the second method because the ECRL Transcriber toolkit provides an adaptation tool that achieves the goals of the first method much faster.

The results show that introducing a NOFP-LM into the equation improves recognition. The difference in recognition accuracy between the ALLFP-LM and ADAPTFP-LM is on average 4.9% across all talkers, in ADAPTFP-LM's favor. Separating the talkers into high FP user and low FP user groups raises ADAPTFP-LM's gain to 6.2% for high FP users and lowers it to 3.3% for low FP users. This shows that adaptation to no-FP data is, counter-intuitively, more beneficial for high FP users.

7. Results and discussion

Although a perplexity test provides a good theoretical measure of a language model, it is not always accurate in predicting the model's performance in a recognizer (Chen 1998); therefore, both perplexity and recognition accuracy were used in this study. Both were calculated using ECRL's LM Transcriber tools.
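The population step behind CONTROLLED-FP-CORPUS (Sections 3.2 and 4.1) can be sketched as follows. This is a sketch under stated assumptions: only the preceding-word probability P(FP | w_{i-1}) is used, the FP token and the training counts are illustrative, and the function names are mine.

```python
import random

# Sketch: estimate P(FP | preceding word) from a small FP-annotated corpus,
# then stochastically populate a clean corpus with FP's. Counts illustrative.

def bigram_fp_model(annotated_tokens, fp="um"):
    """Estimate P(FP | w_{i-1}) = C(w_{i-1} FP) / C(w_{i-1})."""
    follows_fp = {}
    count = {}
    for prev, cur in zip(annotated_tokens, annotated_tokens[1:]):
        if prev == fp:  # don't treat the FP itself as a landing site
            continue
        count[prev] = count.get(prev, 0) + 1
        if cur == fp:
            follows_fp[prev] = follows_fp.get(prev, 0) + 1
    return {w: follows_fp.get(w, 0) / c for w, c in count.items()}

def populate(clean_tokens, model, rng, fp="um"):
    """Each word is a potential landing site; insert an FP after it with
    its modeled probability."""
    out = []
    for w in clean_tokens:
        out.append(w)
        if rng.random() < model.get(w, 0.0):
            out.append(fp)
    return out

annotated = ["patient", "um", "denies", "fever",
             "patient", "reports", "um", "pain"]
model = bigram_fp_model(annotated)
print(model["patient"])  # 0.5: one of two occurrences is followed by an FP
print(populate(["patient", "denies", "fever"], model, random.Random(1)))
```

A random-population baseline in the spirit of RANDOM-FP-CORPUS-1 would simply replace the per-word probability with a constant rate.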
7.1 Perplexity

Perplexity tests were conducted with ECRL's LPlex tool based on the same text corpus (BFP-CORPUS) that was used to build the BIGRAM-FP-LM. Three conditions were used. Condition A used the whole corpus. Condition B used a subset of the corpus containing the high frequency FP users (FPs/Words ratio above 1.0). Condition C used the remaining subset, containing data from lower frequency FP users (FPs/Words ratio below 1.0). Table 1 summarizes the results of perplexity tests at the 3-gram level for the models under the three conditions.

                 Condition A        Condition B        Condition C
                 Lplex   OOV (%)    Lplex   OOV (%)    Lplex   OOV (%)
NOFP-LM          617.59  6.35       1618.35 6.08       287.46  6.06
ADAPTFP-LM       132.74  6.35       --      6.08       131.70  6.06
RANDOMFP-LM-1    138.02  6.35       --      6.08       125.79  6.06
RANDOMFP-LM-2    156.09  6.35       152.16  6.08       145.47  6.06
ALLFP-LM         980.67  6.35       964.48  6.08       916.53  6.06

Table 1. Perplexity measurements

The perplexity measures in Condition A show over a 400 point difference between the ADAPTFP-LM and NOFP-LM language models. The 363.08 increase in perplexity for ALLFP-LM corroborates the results discussed in Section 6. Another interesting result is contained in the highlighted fields of Table 1. ADAPTFP-LM, based on CONTROLLED-FP-CORPUS, has lower perplexity in general. When tested on conditions B and C, ADAPTFP-LM does better on frequent FP users, whereas RANDOMFP-LM-1 does better on infrequent FP users, which is consistent with the recognition accuracy results for the two models (see Table 2).

7.2 Recognition accuracy

Recognition accuracy was obtained with ECRL's HResults tool and is summarized in Table 2.

[Table 2. Recognition accuracy tests for the LM's; the row and column labels are not recoverable, and the surviving values are 51.40%, 66.57%, 67.14%, 67.76%, 71.46%, 69.23% and 71.24%.]
First, a FP model performs better than a clean model that has no FP representation~ Second, a FP model based on populating a no-FP training corpus with FP's whose distribution was derived from a 622 small sample of speech data performs better than the one populated with FP's at random based solely on the frequency of FP's. The results also show that ADAPTFP-LM performs slightly better than RANDOMFP- LM-1 on high FP users. The gain becomes more pronounced towards the higher end of the FP use continuum. For example, the scores for the top four high FP users are 62.07% with RANDOMFP-LM-1 and 63.51% with ADAPTFP-LM. This difference cannot be attributed to the fact that RANDOMFP-LM-1 contains fewer FP's than ADAPTFP-LM. The word accuracy rates for RANDOMFP-LM-2 indicate that frequency of FP's in the training corpus is not responsible for the difference in performance between the RANDOM-FP-LM-1 and the ADAPTFP- LM. The frequency is roughly the same for both RANDOMFP-CORPUS-2 and CONTROLLED-FP-CORPUS, but RANDOMFP-LM-2 scores are lower than those of RANDOMFP-LM-1, which allows in absence of further evidence to attribute the difference in scores to the pattern of FP distribution, not their frequency. Conclusion Based on the results so far, several conclusions about FP modeling can be made: 1. Representing FP's in the training data improves both the language model's perplexity and recognition accuracy. 2. It is not absolutely necessary to have a corpus that contains naturally occurring FP's for successful recognition. FP distribution can be extrapolated from a relatively small corpus containing naturally occurring FP's to a larger clean corpus. This becomes vital in situations where the language model has to be built from "clean" text such as finished transcriptions, newspaper articles, web documents, etc. 3. If one is hard-pressed for hand transcribed data with natural FP's, a . random population can be used with relatively good results. 
FP's are quite common to both quasi-spontaneous monologue and spontaneous dialogue (medical dictation).

Research in progress

The present study leaves a number of issues to be investigated further:
1. The results for RANDOMFP-LM-1 are very close to those of ADAPTFP-LM. A statistical test is needed in order to determine if the difference is significant.
2. A systematic study of the syntactic as well as discursive contexts in which FP's are used in medical dictations. This will involve tagging a corpus of literal transcriptions for various kinds of syntactic and discourse boundaries such as clause, phrase and theme/rheme boundaries. The results of the analysis of the tagged corpus may lead to investigating which lexical items may be helpful in identifying syntactic and discourse boundaries. Although FP's may not always be lexically conditioned, lexical information may be useful in modeling FP's that occur at discourse boundaries due to co-occurrence of such boundaries and certain lexical items.
3. The present study roughly categorizes talkers according to the frequency of FP's in their speech into high FP users and low FP users. A more finely tuned categorization of talkers in respect to FP use, as well as its usefulness, remains to be investigated.
4. Another area of investigation will focus on the SOAP structure of medical dictations. I plan to look at the relative frequency of FP use in the four parts of a medical dictation. Informal observation of data collected so far indicates that FP use is more frequent and different from other parts during the Subjective part of a dictation. This is when the doctor uses fewer frozen expressions and the discourse is closest to a natural conversation.

Acknowledgements

I would like to thank Joan Bachenko and Michael Shonwetter at Linguistic Technologies, Inc., and Bruce Downing at the University of Minnesota for helpful discussions and comments.

References

Chen, S., Beeferman, D., and Rosenfeld, R. (1998).
"Evaluation metrics for language models," In DARPA Broadcast News Transcription and Understanding Workshop.
Christenfeld, N., Schachter, S. and Bilous, F. (1991). "Filled Pauses and Gestures: It's not coincidence," Journal of Psycholinguistic Research, Vol. 20(1).
Cook, M. (1977). "The incidence of filled pauses in relation to part of speech," Language and Speech, Vol. 14, pp. 135-139.
Cook, M. and Lalljee, M. (1970). "The interpretation of pauses by the listener," Brit. J. Soc. Clin. Psy., Vol. 9, pp. 375-376.
Cook, M., Smith, J. and Lalljee, M. (1977). "Filled pauses and syntactic complexity," Language and Speech, Vol. 17, pp. 11-16.
Valtchev, V., Kershaw, D. and Odell, J. (1998). The truetalk transcriber book. Entropic Cambridge Research Laboratory, Cambridge, England.
Heeman, P.A., Loken-Kim, K. and Allen, J.F. (1996). "Combining the detection and correlation of speech repairs," In Proc. ICSLP.
Lalljee, M. and Cook, M. (1974). "Filled pauses and floor holding: The final test?" Semiotica, Vol. 12, pp. 219-225.
Maclay, H. and Osgood, C. (1959). "Hesitation phenomena in spontaneous speech," Word, Vol. 15, pp. 19-44.
Shriberg, E.E. (1994). Preliminaries to a theory of speech disfluencies. Ph.D. thesis, University of California at Berkeley.
Shriberg, E.E. and Stolcke, A. (1996). "Word predictability after hesitations: A corpus-based study," In Proc. ICSLP.
Shriberg, E.E. (1996). "Disfluencies in Switchboard," In Proc. ICSLP.
Shriberg, E.E., Bates, R. and Stolcke, A. (1997). "A prosody-only decision-tree model for disfluency detection," In Proc. EUROSPEECH.
Siu, M. and Ostendorf, M. (1996). "Modeling disfluencies in conversational speech," In Proc. ICSLP.
Stolcke, A. and Shriberg, E. (1996). "Statistical language modeling for speech disfluencies," In Proc. ICASSP.
Swerts, M., Wichmann, A. and Beun, R. (1996). "Filled pauses as markers of discourse structure," In Proc. ICSLP.
1999
83
AUTHOR INDEX

Abella, Alicia  191
Abney, Steven  542
Barzilay, Regina  550
Bateman, John A.  127
Bean, David L.  373
Beil, Franz  104, 269
Berland, Matthew  57
Bian, Guo-Wei  215
Blaheta, Don  513
Bloedorn, Eric  558
Bratt, Elizabeth Owen  183
Breck, Eric  325
Brill, Eric  65
Bruce, Rebecca F.  246
Burger, John D.  325
Canon, Stephen  535
Caraballo, Sharon A.  120
Carroll, Glenn  104, 269
Carroll, John  473
Caudal, Patrick  497
Cech, Claude G.  238
Charniak, Eugene  57, 513
Chen, Hsin-Hsi  215
Chi, Zhiyi  535
Cho, Jeong-Mi  230
Choi, Won Seug  230
Collins, Michael  505
Condon, Sherri L.  238
Content, Alain  436
Core, Mark G.  413
Corston-Oliver, Simon H.  349
Daelemans, Walter  285
Dohsaka, Kohji  200
Dolan, William B.  349
Dowding, John  183
Dras, Mark  80
Edwards, William R.  238
Eisner, Jason  457
Elhadad, Michael  144, 550
Florian, Radu  167
Fung, Pascale  333
Furui, Sadaoki  11
Gardent, Claire  49
Gates, Barbara  558
Gawron, Jean Mark  183
Geman, Stuart  535
Gorin, Allen L.  191
Hajic, Jan  505
Harper, Mary P.  175
Hatzivassiloglou, Vasileios  135
Hearst, Marti A.  3
Hepple, Mark  465
Hirasawa, Jun-ichi  200
Hirschman, Lynette  325
Holt, Alexander  451
Hwa, Rebecca  73
Isahara, Hitoshi  489
Jacquemin, Christian  341, 389
Jang, Myung-Gil  223
Johnson, Mark  421, 535
Joshi, Aravind  41
Kanzaki, Kyoko  489
Kasper, Walter  405
Kawabata, Takeshi  200
Kearns, Michael S.  309
Kiefer, Bernd  405, 473
Kis, Balázs  261
Klein, Ewan  451
Knott, Alistair  41
Koo, Jessica Li Teng  443
Krieger, Hans-Ulrich  405, 473
Kurohashi, Sadao  481
Lange, Marielle  436
Lapata, Maria  397
Lee, Lillian  25, 33
Light, Marc  325
Lim, Chung Yong  443
Lin, Dekang  317
Lin, Wen-Cheng  215
Litman, Diane J.  309
Malouf, Rob  473
Manandhar, Suresh  293
Mani, Inderjeet  558
Marcu, Daniel  365
McAllester, David  542
McCarley, J. Scott  208
McKeown, Kathleen R.  550
Mihalcea, Rada  152
Mikheev, Andrei  159
Miller, George A.  21
Miyazaki, Noboru  200
Moldovan, Dan I.  152
Moore, Robert  183
Morin, Emmanuel  389
Myaeng, Sung Hyon  223
Nagata, Masaaki  277
Nakano, Mikio  200
Netzer, Yael Dahan  144
Ng, Hwee Tou  443
Ngai, Grace  65
Oflazer, Kemal  254
O'Hara, Thomas P.  246
Park, Se Young  223
Pereira, Fernando  33, 542
Prescher, Detlef  104, 269
Prószéky, Gábor  261
Ramshaw, Lance  505
Rapp, Reinhard  519
Resnik, Philip  527
Reynar, Jeffrey C.  357
Riezler, Stefan  104, 269, 535
Riloff, Ellen  373
Roark, Brian  421
Rooth, Mats  104, 269
Rupp, C. J.  405
Sakai, Yasuyuki  481
Satta, Giorgio  457
Schubert, Lenhart K.  413
Schuler, William  88
Seo, Jungyun  230
Shaw, James  135
Shun, Cheung Chi  333
Siegel, Eric V.  112
Steedman, Mark  301
Stent, Amanda  183
Stone, Matthew  41
Tanaka, Hideki  381
Thede, Scott M.  175
Tillmann, Christoph  505
van den Bosch, Antal  285
Walker, Marilyn A.  309
Webber, Bonnie  41
Wiebe, Janyce M.  246
Willis, Alistair  293
Wintner, Shuly  96
Worm, Karsten L.  405
Xiaohu, Liu  333
Yang, Charles D.  429
Yarowsky, David  167
Yokoo, Akio  381

STUDENT AUTHOR INDEX

Choi, Freddy Y. Y.  615
Corduneanu, Adrian  606
Goldberg, Miriam  610
Kaiser, Edward C.  573
Kaufmann, Stefan  591
Kinyon, Alexandra  585
Miyao, Yusuke  579
Pakhomov, Sergey V.  619
Saggion, Horacio  596
Tetreault, Joel R.  602
Thomas, Kavita  569
Man* vs. Machine: A Case Study in Base Noun Phrase Learning

Eric Brill and Grace Ngai
Department of Computer Science
The Johns Hopkins University
Baltimore, MD 21218, USA
Email: {brill,gyn}@cs.jhu.edu

(*and Woman.)

Abstract

A great deal of work has been done demonstrating the ability of machine learning algorithms to automatically extract linguistic knowledge from annotated corpora. Very little work has gone into quantifying the difference in ability at this task between a person and a machine. This paper is a first step in that direction.

1 Introduction

Machine learning has been very successful at solving many problems in the field of natural language processing. It has been amply demonstrated that a wide assortment of machine learning algorithms are quite effective at extracting linguistic information from manually annotated corpora. Among the machine learning algorithms studied, rule based systems have proven effective on many natural language processing tasks, including part-of-speech tagging (Brill, 1995; Ramshaw and Marcus, 1994), spelling correction (Mangu and Brill, 1997), word-sense disambiguation (Gale et al., 1992), message understanding (Day et al., 1997), discourse tagging (Samuel et al., 1998), accent restoration (Yarowsky, 1994), prepositional-phrase attachment (Brill and Resnik, 1994) and base noun phrase identification (Ramshaw and Marcus, In Press; Cardie and Pierce, 1998; Veenstra, 1998; Argamon et al., 1998). Many of these rule based systems learn a short list of simple rules (typically on the order of 50-300) which are easily understood by humans. Since these rule-based systems achieve good performance while learning a small list of simple rules, it raises the question of whether people could also derive an effective rule list manually from an annotated corpus.
In this paper we explore how quickly and effectively relatively untrained people can extract linguistic generalities from a corpus as compared to a machine. There are a number of reasons for doing this. We would like to understand the relative strengths and weaknesses of humans versus machines in hopes of marrying their complementary strengths to create even more accurate systems. Also, since people can use their meta-knowledge to generalize from a small number of examples, it is possible that a person could derive effective linguistic knowledge from a much smaller training corpus than that needed by a machine. A person could also potentially learn more powerful representations than a machine, thereby achieving higher accuracy. In this paper we describe experiments we performed to ascertain how well humans, given an annotated training set, can generate rules for base noun phrase chunking. Much previous work has been done on this problem and many different methods have been used: Church's PARTS (1988) program uses a Markov model; Bourigault (1992) uses heuristics along with a grammar; Voutilainen's NPTool (1993) uses a lexicon combined with a constraint grammar; Justeson and Katz (1995) use repeated phrases; Veenstra (1998), Argamon, Dagan & Krymolowski (1998) and Daelemans, van den Bosch & Zavrel (1999) use memory-based systems; Ramshaw & Marcus (In Press) and Cardie & Pierce (1998) use rule-based systems.

2 Learning Base Noun Phrases by Machine

We used the base noun phrase system of Ramshaw and Marcus (R&M) as the machine learning system with which to compare the human learners. It is difficult to compare different machine learning approaches to base NP annotation, since different definitions of base NP are used in many of the papers, but the R&M system is the best of those that have been tested on the Penn Treebank.
To train their system, R&M used a 200k-word chunk of the Penn Treebank Parsed Wall Street Journal (Marcus et al., 1993) tagged using a transformation-based tagger (Brill, 1995) and extracted base noun phrases from its parses by selecting noun phrases that contained no nested noun phrases, further processing the data with some heuristics (like treating the possessive marker as the first word of a new base noun phrase) to flatten the recursive structure of the parse. They cast the problem as a transformation-based tagging problem, where each word is to be labelled with a chunk structure tag from the set {I, O, B}, where words marked "I" are inside some base NP chunk, those marked "O" are not part of any base NP, and those marked "B" denote the first word of a base NP which immediately succeeds another base NP. The training corpus is first run through a part-of-speech tagger. Then, as a baseline annotation, each word is labelled with the most common chunk structure tag for its part-of-speech tag. After the baseline is achieved, transformation rules fitting a set of rule templates are then learned to improve the "tagging accuracy" of the training set. These templates take into consideration the word, part-of-speech tag and chunk structure tag of the current word and all words within a window of 3 to either side of it. Applying a rule to a word changes the chunk structure tag of a word and in effect alters the boundaries of the base NP chunks in the sentence. An example of a rule learned by the R&M system is: change a chunk structure tag of a word from I to B if the word is a determiner, the next word is a noun, and the two previous words both have chunk structure tags of I. In other words, a determiner in this context is likely to begin a noun phrase.
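The baseline step described above, labelling each word with the most common chunk structure tag for its part-of-speech tag, can be sketched as follows. The counts here are hypothetical; the real system computed them over the Treebank training chunk.

```python
from collections import Counter, defaultdict

def train_baseline(tagged_corpus):
    """tagged_corpus: list of (pos_tag, chunk_tag) pairs.
    Returns a map from each POS tag to its most frequent
    chunk structure tag (I, O, or B)."""
    counts = defaultdict(Counter)
    for pos, chunk in tagged_corpus:
        counts[pos][chunk] += 1
    return {pos: c.most_common(1)[0][0] for pos, c in counts.items()}

# Hypothetical training pairs, not drawn from the Treebank itself.
corpus = [("DT", "I"), ("DT", "I"), ("DT", "B"),
          ("NN", "I"), ("NN", "I"), ("VBD", "O")]
baseline = train_baseline(corpus)
tags = [baseline[pos] for pos in ["DT", "NN", "VBD"]]
print(tags)  # ['I', 'I', 'O']
```

The learned transformation rules then patch this baseline tagging in an ordered sequence, each rule conditioned on the surrounding words, POS tags, and chunk tags.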
The R&M system learns a total of 500 rules. (We would like to thank Lance Ramshaw for providing us with the base-NP-annotated training and test corpora that were used in the R&M system, as well as the rules learned by this system.)

3 Manual Rule Acquisition

R&M framed the base NP annotation problem as a word tagging problem. We chose instead to use regular expressions on words and part of speech tags to characterize the NPs, as well as the context surrounding the NPs, because this is both a more powerful representational language and more intuitive to a person. A person can more easily consider potential phrases as a sequence of words and tags, rather than looking at each individual word and deciding whether it is part of a phrase or not. The rule actions we allow are:

Add: Add a base NP (bracket a sequence of words as a base NP)
Kill: Delete a base NP (remove a pair of parentheses)
Transform: Transform a base NP (move one or both parentheses to extend/contract a base NP)
Merge: Merge two base NPs

(The rule types we have chosen are similar to those used by Vilain and Day (1996) in transformation-based parsing, but are more powerful.)

As an example, we consider an actual rule from our experiments: Bracket all sequences of words of: one determiner (DT), zero or more adjectives (JJ, JJR, JJS), and one or more nouns (NN, NNP, NNS, NNPS), if they are followed by a verb (VB, VBD, VBG, VBN, VBP, VBZ). In our language, the rule is written thus (a full description of the rule language can be found at http://nlp.cs.jhu.edu/~baseNP/manual):

A
(* .)
({1} t=DT) (* t=JJ[RS]?) (+ t=NNP?S?)
({1} t=VB[DGNPZ]?)

The first line denotes the action, in this case, Add a bracketing. The second line defines the context preceding the sequence we want to have bracketed -- in this case, we do not care what this sequence is. The third line defines the sequence which we want bracketed, and the last line defines the context following the bracketed sequence.
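For readers without the authors' rule compiler, the same Add rule can be approximated with an ordinary regular-expression engine. The sketch below is our own illustration, assuming the R&M-style encoding in which each token is rendered as word__TAG and tokens are space-separated; the helper name and the exact encoding are assumptions, not the authors' code.

```python
import re

# One determiner, zero or more adjectives, one or more nouns,
# followed by a verb -- the Add rule from the text.
RULE = re.compile(
    r"(([^\s_]+__DT\s+)"          # one DT
    r"([^\s_]+__JJ[RS]?\s+)*"     # zero or more JJ/JJR/JJS
    r"([^\s_]+__NNP?S?\s+)+)"     # one or more NN/NNS/NNP/NNPS
    r"([^\s_]+__VB[DGNPZ]?\s+)")  # a following verb

def add_brackets(tagged_text):
    """Wrap the matched noun phrase (group 1) in parentheses,
    leaving the triggering verb (group 5) outside."""
    return RULE.sub(r"( \1) \5", tagged_text)

sent = "the__DT big__JJ dog__NN barked__VBD "
print(add_brackets(sent))  # ( the__DT big__JJ dog__NN ) barked__VBD
```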
Internally, the software then translates this rule into the more unwieldy Perl regular expression:

s{(([^\s_]+__DT\s+)([^\s_]+__JJ[RS]?\s+)*([^\s_]+__NNP?S?\s+)+)([^\s_]+__VB[DGNPZ]?\s+)}{( $1 ) $5}g

(The actual system is located at http://nlp.cs.jhu.edu/~basenp/chunking; a screenshot of this system is shown in figure 4.) The correct base NPs are enclosed in parentheses and those annotated by the human's rules in brackets.

The base NP annotation system created by the humans is essentially a transformation-based system with hand-written rules. The user manually creates an ordered list of rules. A rule list can be edited by adding a rule at any position, deleting a rule, or modifying a rule. The user begins with an empty rule list. Rules are derived by studying the training corpus and NPs that the rules have not yet bracketed, as well as NPs that the rules have incorrectly bracketed. Whenever the rule list is edited, the efficacy of the changes can be checked by running the new rule list on the training set and seeing how the modified rule list compares to the unmodified list. Based on this feedback, the user decides whether to accept or reject the changes that were made. One nice property of transformation-based learning is that in appending a rule to the end of a rule list, the user need not be concerned about how that rule may interact with other rules on the list. This is much easier than writing a CFG, for instance, where rules interact in a way that may not be readily apparent to a human rule writer. To make it easy for people to study the training set, word sequences are presented in one of four colors indicating that they:

1. are not part of an NP either in the truth or in the output of the person's rule set
2. consist of an NP both in the truth and in the output of the person's rule set (i.e. they constitute a base NP that the person's rules correctly annotated)
3.
consist of an NP in the truth but not in the output of the person's rule set (i.e. they constitute a recall error)
4. consist of an NP in the output of the person's rule set but not in the truth (i.e. they constitute a precision error)

4 Experimental Set-Up and Results

The experiment of writing rule lists for base NP annotation was assigned as a homework set to a group of 11 undergraduate and graduate students in an introductory natural language processing course. The corpus that the students were given from which to derive and validate rules is a 25k word subset of the R&M training set, approximately 1/8 the size of the full R&M training set. The reason we used a downsized training set was that we believed humans could generalize better from less data, and we thought that it might be possible to meet or surpass R&M's results with a much smaller training set. Figure 1 shows the final precision, recall, F-measure and precision+recall numbers on the training and test corpora for the students. There was very little difference in performance on the training set compared to the test set. This indicates that people, unlike machines, seem immune to overtraining. The time the students spent on the problem ranged from less than 3 hours to almost 10 hours, with an average of about 6 hours. While it was certainly the case that the students with the worst results spent the least amount of time on the problem, it was not true that those with the best results spent the most time -- indeed, the average amount of time spent by the top three students was a little less than the overall average -- slightly over 5 hours. On average, people achieved 90% of their final performance after half of the total time they spent in rule writing. The number of rules in the final rule lists also varied, from as few as 16 rules to as many as 61 rules, with an average of 35.6 rules.
Again, the average number for the top three subjects was a little under the average for everybody: 30.3 rules. (These 11 students were a subset of the entire class. Students were given an option of participating in this experiment or doing a much more challenging final project. Thus, as a population, they tended to be the less motivated students.)

Figure 1: P/R results of test subjects on training and test corpora

            TRAINING SET (25K Words)                 TEST SET
            Precision  Recall  F-Measure  (P+R)/2    Precision  Recall  F-Measure  (P+R)/2
Student 1   87.8%      88.6%   88.2       88.2       88.0%      88.8%   88.4       88.4
Student 2   88.1%      88.2%   88.2       88.2       88.2%      87.9%   88.0       88.1
Student 3   88.6%      87.6%   88.1       88.2       88.3%      87.8%   88.0       88.1
Student 4   88.0%      87.2%   87.6       87.6       86.9%      85.9%   86.4       86.4
Student 5   86.2%      86.8%   86.5       86.5       85.8%      85.8%   85.8       85.8
Student 6   86.0%      87.1%   86.6       86.6       85.8%      87.1%   86.4       86.5
Student 7   84.9%      86.7%   85.8       85.8       85.3%      87.3%   86.3       86.3
Student 8   83.6%      86.0%   84.8       84.8       83.1%      85.7%   84.4       84.4
Student 9   83.9%      85.0%   84.4       84.5       83.5%      84.8%   84.1       84.2
Student 10  82.8%      84.5%   83.6       83.7       83.3%      84.4%   83.8       83.8
Student 11  84.8%      78.8%   81.7       81.8       84.0%      77.4%   80.6       80.7

In the beginning, we believed that the students would be able to match or better the R&M system's results, which are shown in figure 2. It can be seen that when the same training corpus is used, the best students do achieve performances which are close to the R&M system's -- on average, the top 3 students' performances come within 0.5% precision and 1.1% recall of the machine's. In the following section, we will examine the output of both the manual and automatic systems for differences.

5 Analysis

Before we started the analysis of the test set, we hypothesized that the manually derived systems would have more difficulty with potential rules that are effective, but fix only a very small number of mistakes in the training set.
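The F-Measure column in Figure 1 is the standard balanced F-measure, the harmonic mean of precision and recall. As a quick sanity check on one row of the table (Student 11's test-set scores):

```python
def f_measure(precision, recall):
    """Balanced F-measure: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Student 11's test-set precision and recall from Figure 1.
print(round(f_measure(84.0, 77.4), 1))  # 80.6
```

Note that the harmonic mean is always at or below the arithmetic mean reported in the (P+R)/2 column, with the gap growing as precision and recall diverge.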
The distribution of noun phrase types, identified by their part of speech sequence, roughly obeys Zipf's Law (Zipf, 1935): there is a large tail of noun phrase types that occur very infrequently in the corpus. Assuming there is not a rule that can generalize across a large number of these low-frequency noun phrases, the only way noun phrases in the tail of the distribution can be learned is by learning low-count rules: in other words, rules that will only positively affect a small number of instances in the training corpus. Van den Bosch and Daelemans (1998) show that not ignoring the low count instances is often crucial to performance in machine learning systems for natural language. Do the human-written rules suffer from failing to learn these infrequent phrases? To explore the hypothesis that a primary difference between the accuracy of human and machine is the machine's ability to capture the low frequency noun phrases, we observed how the accuracy of noun phrase annotation of both human and machine derived rules is affected by the frequency of occurrence of the noun phrases in the training corpus. We reduced each base NP in the test set to its POS tag sequence as assigned by the POS tagger. For each POS tag sequence, we then counted the number of times it appeared in the training set and the recall achieved on the test set. The plot of the test set recall vs. the number of appearances in the training set of each tag sequence for the machine and the mean of the top 3 students is shown in figure 3. For instance, for base NPs in the test set with tag sequences that appeared 5 times in the training corpus, the students achieved an average recall of 63.6% while the machine achieved a recall of 83.5%. For base NPs with tag sequences that appear less than 6 times in the training set, the machine outperforms the students by a recall of 62.8% vs. 54.8%.
However, for the rest of the base NPs -- those that appear 6 or more times -- the performances of the machine and students are almost identical: 93.7% for the machine vs. 93.5% for the 3 students, a difference that is not statistically significant. The recall graph clearly shows that for the top 3 students, performance is comparable to the machine's on all but the low frequency constituents. This can be explained by the human's reluctance or inability to write a rule that will only capture a small number of new base NPs in the training set. Whereas a machine can easily learn a few hundred rules, each of which makes a very small improvement to accuracy, this is a tedious task for a person, and a task which apparently none of our human subjects was willing or able to take on.

Figure 2: P/R results of the R&M system on test corpus

Training set size (words)  Precision  Recall  F-Measure  (P+R)/2
25k                        88.7%      89.3%   89.0       89.0
200k                       91.8%      92.3%   92.0       92.1

Figure 3: Test Set Recall vs. Frequency of Appearances in Training Set. (Plot not recoverable from extraction; it compares recall curves for Machine and Students, with recall on the vertical axis from roughly 0.3 to 0.9 and number of appearances in the training set on the horizontal axis.)

There is one anomalous point in figure 3. For base NPs with POS tag sequences that appear 3 times in the training set, there is a large decrease in recall for the machine, but a large increase in recall for the students. When we looked at the POS tag sequences in question and their corresponding base NPs, we found that this was caused by one single POS tag sequence -- that of two successive numbers (CD). The test set happened to include many sentences containing sequences of the type:

...( CD CD ) TO ( CD CD )...

as in:

( International/NNP Paper/NNP ) fell/VBD ( 1/CD 3/CD ) to/TO ( 51/CD ½/CD )...

while the training set had none. The machine ended up bracketing the entire sequence 1/CD 3/CD to/TO 51/CD ½/CD as a base NP. None of the students, however, made this mistake.
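The bucketing behind this analysis can be sketched as follows. The data and function below are our own toy illustration; the real computation ran over Treebank POS-tag sequences and the full test set.

```python
from collections import Counter

def recall_by_frequency(train_seqs, gold_test, found_test):
    """Group gold test NPs (POS-tag tuples) by how many times their
    tag sequence occurred in training, and compute recall per count."""
    freq = Counter(train_seqs)
    hits, totals = Counter(), Counter()
    for np in gold_test:
        c = freq[np]
        totals[c] += 1
        if np in found_test:
            hits[c] += 1
    return {c: hits[c] / totals[c] for c in totals}

# Toy example: (DT, NN) seen 8 times in training and recovered at
# test time; (CD, CD) seen twice and missed.
train = [("DT", "NN")] * 8 + [("CD", "CD")] * 2
gold = [("DT", "NN"), ("CD", "CD")]
found = {("DT", "NN")}
print(sorted(recall_by_frequency(train, gold, found).items()))
# [(2, 0.0), (8, 1.0)]
```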
6 Conclusions and Future Work

In this paper we have described research we undertook in an attempt to ascertain how people can perform compared to a machine at learning linguistic information from an annotated corpus, and more importantly to begin to explore the differences in learning behavior between human and machine. Although people did not match the performance of the machine-learned annotator, it is interesting that these "language novices", with almost no training, were able to come fairly close, learning a small number of powerful rules in a short amount of time on a small training set. This challenges the claim that machine learning offers portability advantages over manual rule writing, seeing that relatively unmotivated people can near-match the best machine performance on this task in so little time at a labor cost of approximately US$40. We plan to take this work in a number of directions. First, we will further explore whether people can meet or beat the machine's accuracy at this task. We have identified one major weakness of human rule writers: capturing information about low frequency events. It is possible that by providing the person with sufficiently powerful corpus analysis tools to aid in rule writing, we could overcome this problem. We ran all of our human experiments on a fixed training corpus size. It would be interesting to compare how human performance varies as a function of training corpus size with how machine performance varies. There are many ways to combine human corpus-based knowledge extraction with machine learning. One possibility would be to combine the human and machine outputs. Another would be to have the human start with the output of the machine and then learn rules to correct the machine's mistakes. We could also have a hybrid system where the person writes rules with the help of machine learning. For instance, the machine could propose a set of rules and the person could choose the best one.
We hope that by further studying both human and machine knowledge acquisition from corpora, we can devise learning strategies that successfully combine the two approaches, and by doing so, further improve our ability to extract useful linguistic information from online resources.

Acknowledgements

The authors would like to thank Ryan Brown, Mike Harmon, John Henderson and David Yarowsky for their valuable feedback regarding this work. This work was partly funded by NSF grant IRI-9502312.

References

S. Argamon, I. Dagan, and Y. Krymolowski. 1998. A memory-based approach to learning shallow language patterns. In Proceedings of the 17th International Conference on Computational Linguistics, pages 67-73. COLING-ACL.

D. Bourigault. 1992. Surface grammatical analysis for the extraction of terminological noun phrases. In Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics, pages 977-981.

E. Brill and P. Resnik. 1994. A rule-based approach to prepositional phrase attachment disambiguation. In Proceedings of the Fifteenth International Conference on Computational Linguistics (COLING-1994).

E. Brill. 1995. Transformation-based error-driven learning and natural language processing: A case study in part of speech tagging. Computational Linguistics, December.

C. Cardie and D. Pierce. 1998. Error-driven pruning of treebank grammars for base noun phrase identification. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics, pages 218-224.

K. Church. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In Proceedings of the Second Conference on Applied Natural Language Processing, pages 136-143. Association for Computational Linguistics.

W. Daelemans, A. van den Bosch, and J. Zavrel. 1999. Forgetting exceptions is harmful in language learning.
In Machine Learning, special issue on natural language learning, volume 11, pages 11-43. To appear.

D. Day, J. Aberdeen, L. Hirschman, R. Kozierok, P. Robinson, and M. Vilain. 1997. Mixed-initiative development of language processing systems. In Fifth Conference on Applied Natural Language Processing, pages 348-355. Association for Computational Linguistics, March.

[Figure 4: Screenshot of base NP chunking system, showing corpus-viewing options, the rules written so far (e.g., an existential/pronoun rule and determiner+adjective+noun rules), and POS-tagged sentences with bracketed base NPs]

W. Gale, K. Church, and D. Yarowsky. 1992. One sense per discourse. In Proceedings of the 4th DARPA Speech and Natural Language Workshop, pages 233-237.

J. Justeson and S. Katz. 1995. Technical terminology: Some linguistic properties and an algorithm for identification in text. Natural Language Engineering, 1:9-27.

L. Mangu and E. Brill.
1997. Automatic rule acquisition for spelling correction. In Proceedings of the Fourteenth International Conference on Machine Learning, Nashville, Tennessee.

M. Marcus, M. Marcinkiewicz, and B. Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.

L. Ramshaw and M. Marcus. 1994. Exploring the statistical derivation of transformational rule sequences for part-of-speech tagging. In The Balancing Act: Proceedings of the ACL Workshop on Combining Symbolic and Statistical Approaches to Language, New Mexico State University, July.

L. Ramshaw and M. Marcus. In press. Text chunking using transformation-based learning. In Natural Language Processing Using Very Large Corpora. Kluwer.

K. Samuel, S. Carberry, and K. Vijay-Shanker. 1998. Dialogue act tagging with transformation-based learning. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics, volume 2.

A. van den Bosch and W. Daelemans. 1998. Do not forget: Full memory in memory-based learning of word pronunciation. In New Methods in Language Processing, pages 195-204. Computational Natural Language Learning.

J. Veenstra. 1998. Fast NP chunking using memory-based learning techniques. In BENELEARN-98: Proceedings of the Eighth Belgian-Dutch Conference on Machine Learning, Wageningen, the Netherlands.

M. Vilain and D. Day. 1996. Finite-state parsing by rule sequences. In International Conference on Computational Linguistics, Copenhagen, Denmark, August. The International Committee on Computational Linguistics.

A. Voutilainen. 1993. NPTool, a detector of English noun phrases. In Proceedings of the Workshop on Very Large Corpora, pages 48-57. Association for Computational Linguistics.

D. Yarowsky. 1994. Decision lists for lexical ambiguity resolution: Application to accent restoration in Spanish and French.
In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pages 88-95, Las Cruces, NM.

G. Zipf. 1935. The Psycho-Biology of Language. Houghton Mifflin.
Processes that Shape Conversation and their Implications for Computational Linguistics Susan E. Brennan Department of Psychology State University of New York Stony Brook, NY, US 11794-2500 [email protected] Abstract Experimental studies of interactive language use have shed light on the cognitive and interpersonal processes that shape conversation; corpora are the emergent products of these processes. I will survey studies that focus on under-modelled aspects of interactive language use, including the processing of spontaneous speech and disfluencies; metalinguistic displays such as hedges; interactive processes that affect choices of referring expressions; and how communication media shape conversations. The findings suggest some agendas for computational linguistics. Introduction Language is shaped not only by grammar, but also by the cognitive processing of speakers and addressees, and by the medium in which it is used. These forces have, until recently, received little attention, having been originally consigned to "performance" by Chomsky, and considered to be of secondary importance by many others. But as anyone who has listened to a tape of herself lecturing surely knows, spoken language is formally quite different from written language. And as those who have transcribed conversation are excruciatingly aware, interactive, spontaneous speech is especially messy and disfluent. This fact is rarely acknowledged by psychological theories of comprehension and production (although see Brennan & Schober, in press; Clark, 1994, 1997; Fox Tree, 1995). In fact, experimental psycholinguists still make up most of their materials, so that much of what we know about sentence processing is based on a sanitized, ideal form of language that no one actually speaks. 
But the field of computational linguistics has taken an interesting turn: Linguists and computational linguists who formerly used made-up sentences are now using naturally- and experimentally-generated corpora on which to base and test their theories. One of the most exciting developments since the early 1990s has been the focus on corpus data. Organized efforts such as LDC and ELRA have assembled large and varied corpora of speech and text, making them widely available to researchers and creators of natural language and speech recognition systems. Finally, Internet usage has generated huge corpora of interactive spontaneous text or "visible conversations" that little resemble edited texts. Of course, ethnographers and sociolinguists who practice conversation analysis (e.g., Sacks, Schegloff, & Jefferson, 1974; Goodwin, 1981) have known for a long time that spontaneous interaction is interesting in its own right, and that although conversation seems messy at first glance, it is actually orderly. Conversation analysts have demonstrated that speakers coordinate with each other such feats as achieving a joint focus of attention, producing closely timed turn exchanges, and finishing one another’s utterances. These demonstrations have been compelling enough to inspire researchers from psychology, linguistics, computer science, and human-computer interaction to turn their attention to naturalistic language data. But it is important to keep in mind that a corpus is, after all, only an artifact—a product that emerges from the processes that occur between and within speakers and addressees. Researchers who analyze the textual records of conversation are only overhearers, and there is ample evidence that overhearers experience a conversation quite differently from addressees and from side participants (Schober & Clark, 1989; Wilkes-Gibbs & Clark, 1992).
With a corpus alone, there is no independent evidence of what people actually intend or understand at different points in a conversation, or why they make the choices they do. Conversation experiments that provide partners with a task to do have much to offer, such as independent measures of communicative success as well as evidence of precisely when one partner is confused or has reached a hypothesis about the other’s beliefs or intentions. Task-oriented corpora in combination with information about how they were generated are important for discourse studies. We still don't know nearly enough about the cognitive and interpersonal processes that underlie spontaneous language use—how speaking and listening are coordinated between individuals as well as within the mind of someone who is switching speaking and listening roles in rapid succession. Hence, determining what information needs to be represented moment by moment in a dialog model, as well as how and when it should be updated and used, is still an open frontier. In this paper I start with an example and identify some distinctive features of spoken language interchanges. Then I describe several experiments aimed at understanding the processes that generate them. I conclude by proposing some desiderata for a dialog model. Two people in search of a perspective To begin, consider the following conversational interchange from a laboratory experiment on referential communication. A director and a matcher who could not see each another were trying to get identical sets of picture cards lined up in the same order. 
(1) D: ah boy this one ah boy all right it looks kinda like on the right top there’s a square that looks diagonal
M: uh huh
D: and you have sort of another like rectangle shape, the like a triangle, angled, and on the bottom it’s uh I don’t know what that is, glass shaped
M: all right I think I got it
D: it’s almost like a person kind of in a weird way
M: yeah like like a monk praying or something
D: right yeah good great
M: all right I got it
(Stellmann & Brennan, 1993)

Several things are apparent from this exchange. First, it contains several disfluencies or interruptions in fluent speech. The director restarts her first turn twice and her second turn once. She delivers a description in a series of installments, with backchannels from the matcher to confirm them. She seasons her speech with fillers like uh, pauses occasionally, and displays her commitment (or lack thereof) to what she is saying with displays like ah boy this one ah boy and I don’t know what that is. Even though she is the one who knows what the target picture is, it is the matcher who ends up proposing the description that they both end up ratifying: like a monk praying or something. Once the director has ratified this proposal, they have succeeded in establishing a conceptual pact (see Brennan & Clark, 1996). En route, both partners hedged their descriptions liberally, marking them as provisional, pending evidence of acceptance from the other. This example is typical; in fact, 24 pairs of partners who discussed this object ended up synthesizing nearly 24 different but mutually agreed-upon perspectives. Finally, the disfluencies, hedges, and turns would have been distributed quite differently if this conversation had been conducted over a different medium—through instant messaging, or if the partners had had visual contact. Next I will consider the proceses that underlie these aspects of interactive spoken communication.
1 Speech is disfluent, and disfluencies bear information The implicit assumptions of psychological and computational theories that ignore disfluencies must be either that people aren't disfluent, or that disfluencies make processing more difficult, and so theories of fluent speech processing should be developed before the research agenda turns to disfluent speech processing. The first assumption is clearly false; disfluency rates in spontaneous speech are estimated by Fox Tree (1995) and by Bortfeld, Leon, Bloom, Schober, and Brennan (2000) to be about 6 disfluencies per 100 words, not including silent pauses. The rate is lower for speech to machines (Oviatt, 1995; Shriberg, 1996), due in part to utterance length; that is, disfluency rates are higher in longer utterances, where planning is more difficult, and utterances addressed to machines tend to be shorter than those addressed to people, often because dialogue interfaces are designed to take on more initiative. The average speaker may believe, quite rightly, that machines are imperfect speech processors, and plan their utterances to machines more carefully. The good news is that speakers can adapt to machines; the bad news is that they do so by recruiting limited cognitive resources that could otherwise be focused on the task itself. As for the second assumption, if the goal is to eventually process unrestricted, natural human speech, then committing to an early and exclusive focus on processing fluent utterances is risky. In humans, speech production and speech processing are done incrementally, using contextual information from the earliest moments of processing (see, e.g., Tanenhaus et al. 1995). This sort of processing requires quite a different architecture and different mechanisms for ambiguity resolution than one that begins processing only at the end of a complete and well-formed utterance. 
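Disfluency rates like the 6-per-100-words figure cited above are simple to compute once fillers and word fragments are annotated. A toy sketch follows; the annotation conventions (fillers spelled out, fragments ending in "-") are my own assumptions, not those of the cited studies:

```python
# Count marked disfluencies per 100 words in an annotated transcript.
# Fillers are spelled out; word fragments end in "-". Silent pauses are
# excluded, as in the rates reported by Fox Tree and Bortfeld et al.

FILLERS = {"uh", "um", "er"}

def disfluency_rate(transcript):
    tokens = transcript.lower().split()
    disfluent = sum(1 for t in tokens
                    if t.strip(",.") in FILLERS or t.endswith("-"))
    return 100.0 * disfluent / len(tokens)

utt = "move to the ye- uh, orange square"
print(round(disfluency_rate(utt), 1))  # -> 28.6 (2 disfluencies in 7 words)
```

A real analysis would of course work over annotated corpora rather than raw strings, but the per-100-words normalization is the same.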
Few approaches to parsing have tried to handle disfluent utterances (notable exceptions are Core & Schubert, 1999; Hindle, 1983; Nakatani & Hirschberg, 1994; Shriberg, Bear, & Dowding, 1992). The few psycholinguistic experiments that have examined human processing of disfluent speech also throw into question the assumption that disfluent speech is harder to process than fluent speech. Lickley and Bard (1996) found evidence that listeners may be relatively deaf to the words in a reparandum (the part that would need to be excised in order for the utterance to be fluent), and Shriberg and Lickley (1993) found that fillers such as um or uh may be produced with a distinctive intonation that helps listeners distinguish them from the rest of the utterance. Fox Tree (1995) found that while previous restarts in an utterance may slow a listener’s monitoring for a particular word, repetitions don’t seem to hurt, and some fillers, such as uh, seem to actually speed monitoring for a subsequent word. What information exists in disfluencies, and how might speakers use it? Speech production processes can be broken into three phases: a message or semantic process, a formulation process in which a syntactic frame is chosen and words are filled in, and an articulation process (Bock, 1986; Bock & Levelt, 1994; Levelt, 1989). Speakers monitor their speech both internally and externally; that is, they can make covert repairs at the point when an internal monitoring loop checks the output of the formulation phase before articulation begins, or overt repairs when a problem is discovered after the articulation phase via the speaker's external monitor—the point at which listeners also have access to the signal (Levelt, 1989). According to Nooteboom's (1980) Main Interruption Rule, speakers tend to halt speaking as soon as they detect a problem. 
Production data from Levelt's (1983) corpus supported this rule; speakers interrupted themselves within or right after a problem word 69% of the time. How are regularities in disfluencies exploited by listeners? We have looked at the comprehension of simple fluent and disfluent instructions in a constrained situation where the listener had the opportunity to develop expectations about what the speaker would say (Brennan & Schober, in press). We tested two hypotheses drawn from some suggestions of Levelt's (1989): that "by interrupting a word, a speaker signals to the addressee that the word is an error," and that an editing expression like er or uh may "warn the addressee that the current message is to be replaced," as with Move to the ye— uh, orange square. We collected naturally fluent and disfluent utterances by having a speaker watch a display of objects; when one was highlighted he issued a command about it, like "move to the yellow square." Sometimes the highlight changed suddenly; this sometimes caused the speaker to produce disfluencies. We recorded enough tokens of simple disfluencies to compare the impact of three ways in which speakers interrupt themselves: immediately after a problem word, within a problem word, or within a problem word and with the filler uh. We reasoned that if a disfluency indeed bears useful information, then we should be able to find a situation where a target word is faster to comprehend in a disfluent utterance than in a fluent one. Imagine a situation in which a listener expects a speaker to refer to one of two objects. If the speaker begins to name one and then stops and names the other, the way in which she interrupts the utterance might be an early clue as to her intentions. So the listener may be faster to recognize her intentions relative to a target word in a disfluent utterance than in an utterance in which disfluencies are absent. We compared the following types of utterances: a. 
Move to the orange square (naturally fluent)
b. Move to the orange square (disfluency excised)
c. Move to the yellow- orange square
d. Move to the ye- orange square
e. Move to the ye- uh, orange square
f. Move to the [pause] orange square
g. Move to the ye- [pause] orange square
h. Move to the [pause] uh, orange square

Utterances c, d, and e were spontaneous disfluencies, and f, g, and h were edited versions that replaced the removed material with pauses of equal length to control for timing. In utterances c—h, the reparandum began after the word the and continued until the interruption site (after the unintended color word, color word fragment, or location where this information had been edited out). The edit interval in c—h began with the interruption site, included silence or a filler, and ended with the onset of the repair color word. Response times were calculated relative to the onset of the repair, orange. The results were that listeners made fewer errors, the less incorrect information they heard in the reparandum (that is, the shorter the reparandum), and they were faster to respond to the target word when the edit interval before the repair was longer. They comprehended target words after mid-word interruptions with fillers faster than they did after mid-word interruptions without fillers (since a filler makes the edit interval longer), and faster than they did when the disfluency was replaced by a pause of equal length. This filler advantage did not occur at the expense of accuracy—unlike with disfluent utterances without fillers, listeners made no more errors on disfluent utterances with fillers than they did on fluent utterances. These findings highlight the importance of timing in speech recognition and utterance interpretation. The form and length of the reparandum and edit interval bear consequences for how quickly a disfluent utterance is processed as well as for whether the listener makes a commitment to an interpretation the speaker does not intend.
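The reparandum / edit interval / repair decomposition used in this analysis can be made concrete with a small sketch. This is illustrative only, not the authors' analysis code; the timestamped-tuple annotation format and the millisecond values are my own assumptions:

```python
# Sketch: decompose an annotated disfluent utterance into the regions
# analyzed in the experiment: reparandum, edit interval, and repair.
# The (token, onset_ms, offset_ms) annotation format is hypothetical.

def decompose(words, interruption_site, repair_onset):
    """words: list of (token, onset_ms, offset_ms) tuples.
    The reparandum is assumed to start after 'the'."""
    the_index = [w for w, _, _ in words].index("the")
    reparandum = words[the_index + 1 : interruption_site]
    repair = words[repair_onset:]
    # Reparandum length predicted error rates; edit-interval length
    # (silence and/or filler) predicted response times in the study.
    reparandum_ms = sum(off - on for _, on, off in reparandum)
    edit_ms = repair[0][1] - reparandum[-1][2]
    return reparandum, reparandum_ms, edit_ms

# Utterance (e): "Move to the ye- uh, orange square"
utt = [("Move", 0, 300), ("to", 300, 400), ("the", 400, 500),
       ("ye-", 500, 650), ("uh", 900, 1100),
       ("orange", 1400, 1800), ("square", 1800, 2300)]
frag, rep_ms, edit_ms = decompose(utt, interruption_site=4, repair_onset=5)
print([w for w, _, _ in frag], rep_ms, edit_ms)  # -> ['ye-'] 150 750
```

Response times in the experiment were then measured relative to the repair onset, here the onset of "orange".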
Listeners respond to pauses and fillers on other levels as well, such as to make inferences about speakers’ alignment to their utterances. People coordinate both the content and the process of conversation; fillers, pauses, and self-speech can serve as displays by speakers that provide an account to listeners for difficulties or delays in speaking (Clark, 1994; Clark, 1997; Clark & Brennan, 1991). Speakers signal their Feeling-of-Knowing (FOK) when answering a question by the displays they put on right before the answer (or right before they respond with I don’t know) (Brennan & Williams, 1995; Smith & Clark, 1993). In these experiments, longer latencies, especially ones that contained fillers, were associated with answers produced with a lower FOK and that turned out to be incorrect. Thus in the following example, A1 displayed a lower FOK than A2:

Q: Who founded the American Red Cross?
A1: .....um......... Florence Nightingale?
A2: ......... Clara Barton.

Likewise, non-answers (e.g., I don’t know) after a filler or a long latency were produced by speakers who were more likely to recognize the correct answers later on a multiple choice test; those who produced a non-answer immediately did not know the answers. Not only do speakers display their difficulties and metalinguistic knowledge using such devices, but listeners can process this information to produce an accurate Feeling-of-Another's-Knowing, or estimate of the speaker’s likelihood of knowing the correct answer (Brennan & Williams, 1995). These programs of experiments hold implications for both the generation and interpretation of spoken utterances. A system could indicate its confidence in its message with silent pauses, fillers, and intonation, and users should be able to interpret this information accurately. If machine speech recognition were conducted in a fashion more like human speech recognition, timing would be a critical cue and incremental parses would be continually made and unmade.
Although this approach would be computationally expensive, it might produce better results with spontaneous speech.

2 Referring expressions are provisional until ratified by addressees

Consider again the exchange in Example (1). After some work, the director and matcher eventually settled on a mutual perspective. When they finished matching the set of 12 picture cards, the cards were shuffled and the task was repeated several more times. In the very next round, the conversation went like this:

(2) B: nine is that monk praying
A: yup

Later on, referring was even more efficient:

(3) A: three is the monk
B: ok

A and B, who switched roles on each round, marked the fact that they had achieved a mutual perspective by reusing the same term, monk, in repeated references to the same object. These references tend to shorten over time. In Brennan and Clark (1996), we showed that once people coordinate a perspective on an object, they tend to continue to use the same terms that mark that shared perspective (e.g., the man’s pennyloafer), even when they could use an even shorter basic-level term (e.g., the shoe, when the set of objects has changed such that it no longer needs to be distinguished from other shoes in the set). This process of conceptual entrainment appears to be partner-specific—upon repeated referring to the same object but with a new partner, speakers were more likely to revert to the basic-level term, due in part to the feedback they received from their partners (Brennan & Clark, 1996). These examples depict the interpersonal processes that lead to conceptual entrainment. The director and matcher used many hedges in their initial proposals and counter-proposals (e.g., it’s almost like a person kind of in a weird way, and yeah like like a monk praying or something). Hedges dropped out upon repeated referring. We have proposed (Brennan & Clark, 1996) that hedges are devices for signaling a speaker's commitment to the perspective she is proposing.
Hedges serve social needs as well, by inviting counter-proposals from the addressee without risking loss of face due to overt disagreements (Brennan & Ohaeri, 1999). It is worth noting that people's referring expressions converge not only with those of their human partners, but also with those of computer partners (Brennan, 1996; Ohaeri, 1995). In our text and spoken dialogue Wizard-of-Oz studies, when simulated computer partners used deliberately different terms than the ones people first presented to them, people tended to adopt the computers' terms, even though the computers had apparently "understood" the terms people had first produced (Brennan, 1996; Ohaeri, 1995). The impetus toward conceptual entrainment marked by repeated referring expressions appears to be so compelling that native speakers of English will even produce non-idiomatic referring expressions (e.g., the chair in which I shake my body, referring to a rocking chair) in order to ratify a mutually achieved perspective with non-native speakers (Bortfeld & Brennan, 1997). Such findings hold many implications for utterance generation and the design of dialogue models. Spoken and text dialogue interfaces of the future should include resources for collaboration, including those for negotiating meanings, modeling context, recognizing which referring expressions are likely to index a particular conceptualization, keeping track of the referring expressions used by a partner so far, and reusing those expressions. This would help solve the “vocabulary problem” in human-computer interaction (Brennan, to appear).

3 Grounding varies with the medium

Grounding is the process by which people coordinate their conversational activities, establishing, for instance, that they understand one another well enough for current purposes.
There are many activities to coordinate in conversation, each with its own cost, including:
• getting an addressee’s attention in order to begin the conversation
• planning utterances the addressee is likely to understand
• producing utterances
• recognizing when the addressee does not understand
• initiating and managing repairs
• determining what inferences to make when there is a delay
• receiving utterances
• recognizing the intention behind an utterance
• displaying or acknowledging this understanding
• keeping track of what has been discussed so far (common ground due to linguistic co-presence)
• determining when to take a turn
• monitoring and furthering the main purposes or tasks at hand
• serving other important social needs, such as face-management
(adapted from Clark & Brennan, 1991)

Most of these activities are relatively easy to do when interaction is face-to-face. However, the affordances of different media affect the costs of coordinating these activities. The actual forms of speech and text corpora are shaped by how people balance and trade off these costs in the context of communication. In a referential communication study, I compared task-oriented conversations in which one person either had or didn’t have visual evidence about the other’s progress (Brennan, 1990). Pairs of people discussed many different locations on identical maps displayed on networked computer screens in adjoining cubicles. The task was for the matcher to get his car icon parked in the same spot as the car displayed on only the director’s screen. In one condition, Visual Evidence, the director could see the matcher’s car icon and its movements. In the other, Verbal-Only Evidence, she could not. In both conditions, they could talk freely. Language-action transcripts were produced for a randomly chosen 10% of 480 transcribed interchanges.
During each trial, the x and y coordinates of the matcher's icon were recorded and time-stamped, as a moment-by-moment estimate of where the matcher thought the target location was. For the sample of 48 trials, I plotted the distance between the matchers' icon and the target (the director's icon) over time, to provide a visible display of how their beliefs about the target location converged. Sample time-distance plots are shown in Figures 1 and 2. Matchers' icons got closer to the target over time, but not at a steady rate. Typically, distance diminished relatively steeply early in the trial, while the matcher interpreted the director's initial description and rapidly moved his icon toward the target location. Many of the plots then showed a distinct elbow followed by a nearly horizontal region, meaning that the matcher then paused or moved away only slightly before returning to park his car icon. This suggests that it wasn’t sufficient for the matcher to develop a reasonable hypothesis about what the director meant by the description she presented, but that they also had to ground their understanding, or exchange sufficient evidence in order to establish mutual belief. The region after the elbow appears to correspond to the acceptance phase proposed by Clark & Schaefer (1989); the figures show that it was much shorter when directors had visual evidence than when they did not. The accompanying speech transcripts, when synchronized with the time-distance plots, showed that matchers gave verbal acknowledgements when directors did not have visual evidence and withheld them when directors did have visual evidence. Matchers made this adjustment to directors even though the information on the matchers’ own screen was the same for both conditions, which alternated after every 10 locations for a total of 80 locations discussed by each pair.
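The time-distance analysis lends itself to a simple sketch: given time-stamped (x, y) positions of the matcher's icon, compute its distance to the target at each timestamp. The log format and the coordinate values below are hypothetical, not the actual experimental data:

```python
import math

# Sketch of the time-distance analysis: distance from the matcher's
# icon to the target (the director's icon) at each logged timestamp.

def time_distance_series(positions, target):
    """positions: list of (t_seconds, x, y); target: (x, y)."""
    tx, ty = target
    return [(t, math.hypot(x - tx, y - ty)) for t, x, y in positions]

# A toy log showing the typical shape: a steep early approach,
# then a nearly flat tail before the icon is finally parked.
log = [(0, 0, 0), (5, 120, 90), (10, 200, 150),
       (15, 196, 148), (20, 200, 150)]
series = time_distance_series(log, target=(200, 150))
print(series[0], series[-1])  # -> (0, 250.0) (20, 0.0)
```

Plotting such a series against time yields the elbow-and-plateau shape described above, with the plateau corresponding to the acceptance phase.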
Figure 1: Time-Distance Plot of Matcher-Director Convergence, Without Visual Evidence of the Matcher’s Progress

Figure 2: Time-Distance Plot of Matcher-Director Convergence, With Visual Evidence of the Matcher’s Progress

These results document the grounding process and the time course of how directors’ and matchers’ hypotheses converge. The process is a flexible one; partners shift the responsibility to whomever can pay a particular cost most easily, expending the least collaborative effort (Clark & Wilkes-Gibbs, 1986). In another study of how media affect conversation (Brennan & Ohaeri, 1999; Ohaeri, 1998) we looked at how grounding shapes conversation held face-to-face vs. via chat windows in which people sent text messages that appeared immediately on their partners’ screens. Three-person groups had to reach a consensus account of a complex movie clip they had viewed together. We examined the costs of serving face-management needs (politeness) and looked at devices that serve these needs by giving a partner options or seeking their input. The devices counted were hedges and questions. Although both kinds of groups recalled the events equally well, they produced only half as many words typing as speaking. There were much lower rates of hedging (per 100 words) in the text conversations than face-to-face, but the same rates of questions. We explained these findings by appealing to the costs of grounding over different media: Hedging requires using additional words, and therefore is more costly in typed than spoken utterances. Questions, on the other hand, require only different intonation or punctuation, and so are equally easy, regardless of medium.
The fact that people used just as many questions in both kinds of conversations suggests that people in electronic or remote groups don't cease to care about face-management needs, as some have suggested; it's just harder to meet these needs when the medium makes the primary task more difficult.
Desiderata for a Dialogue Model
Findings such as these hold a number of implications for both computational linguistics and human-computer interaction. First is a methodological point: corpus data and dialogue feature coding are particularly useful when they include systematic information about the tasks conversants were engaged in. Second, there is a large body of evidence that people accomplish utterance production and interpretation incrementally, using information from all available sources in parallel. If computational language systems are ever to approach the power, error recovery ability, and flexibility of human language processing, then more research needs to be done using architectures that can support incremental processing. Architectures should not be based on assumptions that utterances are complete and well-formed, and that processing is modular. A related issue is that timing is critically important in interactive systems. Many models of language processing focus on the propositional content of speech with little attention to "performance" or "surface" features such as timing. (Other non-propositional aspects such as intonation are important as well.) Computational dialogue systems (both text and spoken) should include resources for collaboration. When a new referring expression is introduced, it could be marked as provisional. Fillers can be used to display trouble, and hedges, to invite input. Dialogue models should track the forms of referring expressions used in a discourse so far, enabling agents to use the same terms consistently to refer to the same things.
Because communication media shape conversations and their emergent corpora, minor differences in features of a dialogue interface can have major impact on the form of the language that is generated, as well as on coordination costs that language users pay. Finally, dialogue models should keep a structured record of jointly achieved contributions that is updated and revised incrementally. No agent is omniscient; a dialogue model represents only one agent's estimate of the common ground so far (see Cahn & Brennan, 1999). There are many open and interesting questions about how to best structure the contributions from interacting partners into a dialogue model, as well as how such a model can be used to support incremental processes of generation, interpretation, and repair.
Acknowledgements
This material is based upon work supported by the National Science Foundation under Grants No. IRI9402167, IRI9711974, and IRI9980013. I thank Michael Schober for helpful comments.
References
Bock, J. K. (1986). Meaning, sound, and syntax: Lexical priming in sentence production. J. of Experimental Psychology: Learning, Memory, & Cognition, 12, 575-586.
Bock, K., & Levelt, W. J. M. (1994). Language production: Grammatical encoding. In M. A. Gernsbacher (Ed.), Handbook of psycholinguistics (pp. 945-984). London: Academic Press.
Bortfeld, H., & Brennan, S. E. (1997). Use and acquisition of idiomatic expressions in referring by native and non-native speakers. Discourse Processes, 23, 119-147.
Bortfeld, H., Leon, S. D., Bloom, J. E., Schober, M. F., & Brennan, S. E. (2000). Disfluency rates in spontaneous speech: Effects of age, relationship, topic, role, and gender. Manuscript under review.
Brennan, S. E. (1990). Seeking and providing evidence for mutual understanding. Unpublished doctoral dissertation. Stanford University.
Brennan, S. E. (1996). Lexical entrainment in spontaneous dialog. Proc. 1996 International Symposium on Spoken Dialogue (ISSD-96) (pp. 41-44). Acoustical Society of Japan: Phila., PA.
Brennan, S. E. (to appear). The vocabulary problem in spoken dialog systems. In S. Luperfoy (Ed.), Automated Spoken Dialog Systems. Cambridge, MA: MIT Press.
Brennan, S. E., & Clark, H. H. (1996). Conceptual pacts and lexical choice in conversation. J. of Experimental Psychology: Learning, Memory, & Cognition, 22, 1482-1493.
Brennan, S. E., & Ohaeri, J. O. (1999). Why do electronic conversations seem less polite? The costs and benefits of hedging. Proc. Int. Joint Conference on Work Activities, Coordination, and Collaboration (WACC '99) (pp. 227-235). San Francisco, CA: ACM.
Brennan, S. E., & Schober, M. F. (in press). How listeners compensate for disfluencies in spontaneous speech. J. of Memory & Language.
Brennan, S. E., & Williams, M. (1995). The feeling of another's knowing: Prosody and filled pauses as cues to listeners about the metacognitive states of speakers. J. of Memory & Language, 34, 383-398.
Cahn, J. E., & Brennan, S. E. (1999). A psychological model of grounding and repair in dialog. Proc. AAAI Fall Symposium on Psychological Models of Communication in Collaborative Systems (pp. 25-33). North Falmouth, MA: AAAI.
Clark, H. H. (1994). Managing problems in speaking. Speech Communication, 15, 243-250.
Clark, H. H. (1997). Dogmas of understanding. Discourse Processes, 23, 567-598.
Clark, H. H., & Brennan, S. E. (1991). Grounding in communication. In L. B. Resnick, J. Levine, & S. D. Teasley (Eds.), Perspectives on socially shared cognition (pp. 127-149).
Clark, H. H., & Schaefer, E. F. (1989). Contributing to discourse. Cognitive Science, 13, 259-294.
Clark, H. H., & Wilkes-Gibbs, D. (1986). Referring as a collaborative process. Cognition, 22, 1-39.
Core, M. G., & Schubert, L. K. (1999). A model of speech repairs and other disruptions. Proc. AAAI Fall Symposium on Psychological Models of Communication in Collaborative Systems. North Falmouth, MA: AAAI.
Fox Tree, J. E. (1995). The effects of false starts and repetitions on the processing of subsequent words in spontaneous speech. J. of Memory & Language, 34, 709-738.
Goodwin, C. (1981). Conversational organization: Interaction between speakers and hearers. New York: Academic Press.
Hindle, D. (1983). Deterministic parsing of syntactic non-fluencies. In Proc. of the 21st Annual Meeting, Association for Computational Linguistics, Cambridge, MA, pp. 123-128.
Levelt, W. J. M. (1983). Monitoring and self-repair in speech. Cognition, 14, 41-104.
Levelt, W. (1989). Speaking: From intention to articulation. Cambridge, MA: MIT Press.
Lickley, R., & Bard, E. (1996). On not recognizing disfluencies in dialog. Proc. International Conference on Spoken Language Processing (ICSLP '96), Philadelphia, 1876-1879.
Nakatani, C. H., & Hirschberg, J. (1994). A corpus-based study of repair cues in spontaneous speech. J. of the Acoustical Society of America, 95, 1603-1616.
Nooteboom, S. G. (1980). Speaking and unspeaking: Detection and correction of phonological and lexical errors in spontaneous speech. In V. A. Fromkin (Ed.), Errors in linguistic performance. New York: Academic Press.
Ohaeri, J. O. (1995). Lexical convergence with human and computer partners: Same cognitive process? Unpublished Master's thesis. SUNY, Stony Brook, NY.
Ohaeri, J. O. (1998). Group processes and the collaborative remembering of stories. Unpublished doctoral dissertation. SUNY, Stony Brook, NY.
Oviatt, S. (1995). Predicting spoken disfluencies during human-computer interaction. Computer Speech and Language, 9, 19-35.
Sacks, H., Schegloff, E., & Jefferson, G. (1974). A simplest systematics for the organization of turn-taking in conversation. Language, 50, 696-735.
Schober, M. F., & Clark, H. H. (1989). Understanding by addressees and overhearers. Cognitive Psychology, 21, 211-232.
Shriberg, E. (1996). Disfluencies in Switchboard. Proceedings, International Conference on Spoken Language Processing, Vol. Addendum, 11-14. Philadelphia, PA, 3-6 October.
Shriberg, E., Bear, J., & Dowding, J. (1992). Automatic detection and correction of repairs in human-computer dialog. In M. Marcus (Ed.), Proc. DARPA Speech and Natural Language Workshop (pp. 419-424). Morgan Kaufmann.
Shriberg, E. E., & Lickley, R. J. (1993). Intonation of clause-internal filled pauses. Phonetica, 50, 172-179.
Smith, V., & Clark, H. H. (1993). On the course of answering questions. J. of Memory and Language, 32, 25-38.
Stellmann, P., & Brennan, S. E. (1993). Flexible perspective-setting in conversation. Abstracts of the Psychonomic Society, 34th Annual Meeting (p. 20), Washington, DC.
Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268, 1632-1634.
Wilkes-Gibbs, D., & Clark, H. H. (1992). Coordinating beliefs in conversation. Journal of Memory and Language, 31, 183-194.
Robust Temporal Processing of News
Inderjeet Mani and George Wilson
The MITRE Corporation, W640, 11493 Sunset Hills Road, Reston, Virginia 22090
{imani, gwilson}@mitre.org
Abstract
We introduce an annotation scheme for temporal expressions, and describe a method for resolving temporal expressions in print and broadcast news. The system, which is based on both hand-crafted and machine-learnt rules, achieves an 83.2% accuracy (F-measure) against hand-annotated data. Some initial steps towards tagging event chronologies are also described.
Introduction
The extraction of temporal information from news offers many interesting linguistic challenges in the coverage and representation of temporal expressions. It is also of considerable practical importance in a variety of current applications. For example, in question-answering, it is useful to be able to resolve the underlined reference in "the next year, he won the Open" in response to a question like "When did X win the U.S. Open?". In multi-document summarization, providing fine-grained chronologies of events over time (e.g., for a biography of a person, or a history of a crisis) can be very useful. In information retrieval, being able to index broadcast news stories by event times allows for powerful multimedia browsing capabilities. Our focus here, in contrast to previous work such as (MUC 1998), is on resolving time expressions, especially indexical expressions like "now", "today", "tomorrow", "next Tuesday", "two weeks ago", "20 mins after the next hour", etc., which designate times that are dependent on the speaker and some "reference" time¹. In this paper, we discuss a temporal annotation scheme for representing dates and times in temporal expressions. This is followed by details and performance measures for a tagger to extract this information from news sources. The tagger uses a variety of hand-crafted and machine-discovered rules, all of which rely on lexical features that are easily recognized.
We also report on a preliminary effort towards constructing event chronologies from this data.
1 Annotation Scheme
Any annotation scheme should aim to be simple enough to be executed by humans, and yet precise enough for use in various natural language processing tasks. Our approach (Wilson et al. 2000) has been to annotate those things that a human could be expected to tag. Our representation of times uses the ISO standard CC:YY:MM:DD:HH:XX:SS, with an optional time zone (ISO-8601 1997). In other words, time points are represented in terms of a calendric coordinate system, rather than a real number line. The standard also supports the representation of weeks and days of the week in the format CC:YY:Wwwd where ww specifies which week within the year (1-53) and d specifies the day of the week (1-7). For example, "last week" might receive the VAL 20:00:W16. A time (TIMEX) expression (of type TIME or DATE) representing a particular point on the ISO line, e.g., "Tuesday, November 2, 2000" (or "next Tuesday") is represented with the ISO time Value (VAL), 20:00:11:02. Interval expressions like "From May 1999 to June 1999", or "from 3 pm to 6 pm" are represented as two separate TIMEX expressions. In addition to the values provided by the ISO standard, we have added several extensions, including a list of additional tokens to represent some commonly occurring temporal units; for example, "summer of '69" could be represented as 19:69:SU. The intention here is to capture the information in the text while leaving further interpretation of the Values to applications using the markup. It is worth noting that there are several kinds of temporal expressions that are not to be tagged, and that other expressions tagged as a time expression are not assigned a value, because doing so would violate the simplicity and preciseness requirements.
¹ Some of these indexicals have been called "relative times" in the (MUC 1998) temporal tagging task.
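As a concrete illustration, the calendric VAL strings of this scheme can be generated from ordinary dates. This is a sketch of the encoding only: the function names are ours, and the week form uses ISO week numbering as an approximation of the scheme's 1-53 week count.

```python
import datetime

def date_val(d):
    """Render a date in the scheme's CC:YY:MM:DD form,
    e.g. November 2, 2000 -> "20:00:11:02"."""
    return f"{d.year // 100:02d}:{d.year % 100:02d}:{d.month:02d}:{d.day:02d}"

def week_val(d):
    """Render the CC:YY:Wwwd week form (ww = week of year, d = day of week).
    Century/year are taken from the calendar year, which can differ from the
    ISO week-numbering year at year boundaries."""
    _, week, dow = d.isocalendar()
    return f"{d.year // 100:02d}:{d.year % 100:02d}:W{week:02d}{dow}"

print(date_val(datetime.date(2000, 11, 2)))  # 20:00:11:02, as in the text
```

Applied to a day in week 16 of 2000, `week_val` yields a value with the 20:00:W16 prefix used in the "last week" example.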
We do not tag unanchored intervals, such as "half an hour (long)" or "(for) one month". Non-specific time expressions like generics, e.g., "April" in "April is usually wet", or "today" in "today's youth", and indefinites, e.g., "a Tuesday", are tagged without a value. Finally, expressions which are ambiguous without a strongly preferred reading are left without a value. This representation treats points as primitive (as do (Bennett and Partee 1972), (Dowty 1979), among others); other representations treat intervals as primitive, e.g., (Allen 1983). Arguments can be made for either position, as long as both intervals and points are accommodated. The annotation scheme does not force committing to end-points of intervals, and is compatible with current temporal ontologies such as (KSL-Time 1999); this may help eventually support advanced inferential capabilities based on temporal information extraction.
2 Tagging Method
2.1 Overall Architecture
The system architecture of the temporal tagger is shown in Figure 1. The tagging program takes in a document which has been tokenized into words and sentences and tagged for part-of-speech. The program passes each sentence first to a module that identifies time expressions, and then to another module (SC) that resolves self-contained time expressions. The program then takes the entire document and passes it to a discourse processing module (DP) which resolves context-dependent time expressions (indexicals as well as other expressions). The DP module tracks transitions in temporal focus, uses syntactic clues, and various other knowledge sources. The module uses a notion of Reference Time to help resolve context-dependent expressions. Here, the Reference Time is the time a context-dependent expression is relative to. In our work, the reference time is assigned the value of either the Temporal Focus or the document (creation) date. The Temporal Focus is the time currently being talked about in the narrative.
The initial reference time is the document date.
2.2 Assignment of time values
We now discuss the modules that assign values to identified time expressions. Times which are fully specified are tagged with their value, e.g., "June 1999" as 19:99:06 by the SC module. The DP module uses an ordered sequence of rules to handle the context-dependent expressions. These cover the following cases:
Explicit offsets from reference time: Indexicals like "yesterday", "today", "tomorrow", "this afternoon", etc., are ambiguous between a specific and a non-specific reading. The specific use (distinguished from the generic one by machine-learned rules discussed below) gets assigned a value based on an offset from the reference time, but the generic use does not.
Positional offsets from reference time: Expressions like "next month", "last year" and "this coming Thursday" use lexical markers (underlined) to describe the direction and magnitude of the offset from the reference time.
Implicit offsets based on verb tense: Expressions like "Thursday" in "the action taken Thursday", or bare month names like "February", are passed to rules that try to determine the direction of the offset from the reference time. Once the direction is determined, the magnitude of the offset can be computed. The tense of a neighboring verb is used to decide what direction to look to resolve the expression. Such a verb is found by first searching backward to the last TIMEX, if any, in the sentence, then forward to the end of the sentence, and finally backwards to the beginning of the sentence. If the tense is past, then the direction is backwards from the reference time. If the tense is future, the direction is forward. If the verb is present tense, the expression is passed on to subsequent rules for resolution.
For example, in the following passage, "Thursday" is resolved to the Thursday prior to the reference date because "was", which has a past tense tag, is found earlier in the sentence: The Iraqi news agency said the first shipment of 600,000 barrels was loaded Thursday by the oil tanker Edinburgh.
Further use of lexical markers: Other expressions lacking a value are examined for the nearby presence of a few additional markers, such as "since" and "until", that suggest the direction of the offset.
Nearby Dates: If a direction from the reference time has not been determined, some dates, like "Feb. 14", and other expressions that indicate a particular date, like "Valentine's Day", may still be untagged because the year has not been determined. If the year can be chosen in a way that makes the date in question less than a month from the reference date, that year is chosen. For example, if the reference date is Feb. 20, 2000 and the expression "Feb. 14" has not been assigned a value, this rule would assign it the value Feb. 14, 2000. Dates more than a month away are not assigned values by this rule.
3 Time Tagging Performance
3.1 Test Corpus
There were two different genres used in the testing: print news and broadcast news transcripts. The print news consisted of 22 New York Times (NYT) articles from January 1998. The broadcast news data consisted of 199 transcripts of Voice of America (VOA) broadcasts from January of 1998, taken from the TDT2 collection (TDT2 1999). The print data was much cleaner than the transcribed broadcast data in the sense that there were very few typographical errors, and spelling and grammar were good. On the other hand, the print data also had longer, more complex sentences with somewhat greater variety in the words used to represent dates. The broadcast collection had a greater proportion of expressions referring to time of day, primarily due to repeated announcements of the current time and the time of upcoming shows.
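Two of the Section 2.2 resolution rules are easy to make concrete. The sketch below is our own simplification, not the paper's implementation; it takes "less than a month" in the nearby-dates rule to mean within 31 days.

```python
import datetime

REF = datetime.date(2000, 2, 20)  # reference (document) date from the paper's example

def explicit_offset(word, ref=REF):
    """Specific readings of explicit indexicals: a fixed day offset
    from the reference time."""
    offsets = {"yesterday": -1, "today": 0, "tomorrow": 1}
    return ref + datetime.timedelta(days=offsets[word])

def nearby_date(month, day, ref=REF):
    """Year-selection rule: pick the year that puts an underspecified date
    within a month of the reference date; return None (unresolved) if no
    such year exists."""
    for year in (ref.year - 1, ref.year, ref.year + 1):
        candidate = datetime.date(year, month, day)
        if abs((candidate - ref).days) <= 31:
            return candidate
    return None

print(nearby_date(2, 14))  # "Feb. 14" resolves to 2000-02-14, as in the text
```

A date like "Aug. 14" is more than a month from the reference date in any candidate year, so `nearby_date(8, 14)` correctly leaves it unresolved.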
The test data was marked by hand tagging the time expressions and assigning values to them where appropriate. This hand-marked data was used to evaluate the performance of a frozen version of the machine tagger, which was trained and engineered on a separate body of NYT, ABC News, and CNN data. Only the body of the text was included in the tagging and evaluation.
3.2 System performance
The system performance is shown in Table 1². Note that if the human said the TIMEX had no value, and the system decided it had a value, this is treated as an error. A baseline of just tagging values of absolute, fully specified TIMEXs (e.g., "January 31st, 1999") is shown for comparison in parentheses. Obviously, we would prefer a larger data sample; we are currently engaged in an effort within the information extraction community to annotate a large sample of the TDT2 collection and to conduct an interannotator reliability study.
Error Analysis
Table 2 shows the number of errors made by the program classified by the type of error. Only 2 of these 138 errors (5 on TIME, 133 on DATE) were due to errors in the source. 14 of the 138 errors (9 NYT vs. 5 VOA) were due to the document date being incorrect as a reference time.
Part of speech tagging: Some errors, both in the identification of time expressions and the assignment of values, can be traced to incorrect part of speech tagging in the preprocessing; many of these errors should be easily correctable.
TIMEX expressions: A total of 44 errors were made in the identification of TIMEX expressions.
Not yet implemented: The biggest source of errors in identifying time expressions was formats that had not yet been implemented. For example, one third (7 of 21, 5 of which were of type TIME) of all missed time expressions came from numeric expressions being spelled out, e.g. "nineteen seventy-nine".
² The evaluated version of the system does not adjust the Reference Time for subsequent sentences.
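As an aside on Table 1's scores: the reported F-measure is the standard harmonic mean of precision and recall, and the overall rows can be checked in a few lines (the function name is ours):

```python
def f_measure(precision, recall):
    """Balanced F-measure: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Overall rows of Table 1: Values P=83.7, R=82.7; TIMEX P=96.8, R=95.6.
print(round(f_measure(83.7, 82.7), 1))  # 83.2
print(round(f_measure(96.8, 95.6), 1))  # 96.2
```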
More than two thirds (11 of 16) of the time expressions for which the program incorrectly found the boundaries of the expression (bad extent) were due to the unimplemented pattern "Friday the 13th". Generalization of the existing patterns should correct these errors.
Proper Name Recognition: A few items were spuriously tagged as time expressions (extra TIMEX). One source of this that should be at least partially correctable is in the tagging of apparent dates in proper names, e.g. "The July 26 Movement", "The Tonight Show", "USA Today". The time expression identifying rules assumed that these had been tagged as lexical items, but this lexicalization has not yet been implemented.
Values assigned: A total of 94 errors were made in the assignment of values to time expressions that had been correctly identified.
Generic/Specific: In the combined data, 25 expressions were assigned a value when they should have received none because the expression was a generic usage that could not be placed on a time line. This is the single biggest source of errors in the value assignments.
4 Machine Learning Rules
Our approach has been to develop initial rules by hand, conduct an initial evaluation on an unseen test set, determine major errors, and then handle those errors by augmenting the rule set with additional rules discovered by machine learning. As noted earlier, distinguishing between specific use of a time expression and a generic use (e.g., "today", "now", etc.) was and is a significant source of error. Some of the other problems that these methods could be applied to include distinguishing a calendar year reference from a fiscal year one (as in "this year"), and distinguishing seasonal from specific day references. For example, "Christmas" has a seasonal use (e.g., "I spent Christmas visiting European capitals") distinct from its specific-day use referring to "December 25th" (e.g., "We went to a great party on Christmas").
Here we discuss machine learning results in distinguishing specific use of "today" (meaning the day of the utterance) from its generic use meaning "nowadays". In addition to features based on words cooccurring with "today" (the Said, Will, Even, Most, and Some features below), some other features (DOW and CCYY) were added based on a granularity hypothesis. Specifically, it seems possible that "today" meaning the day of the utterance sets a scale of events at a day or a small number of days. The generic use, "nowadays", seems to have a broader scale. Therefore, terms that might point to one of these scales, such as the names of days of the week, the word "year" and four-digit years, were also included in the training features. To summarize, the features we used for the "today" problem are as follows (features are boolean except for string-valued POS1 and POS2):
Poss: whether "today" has a possessive inflection
Qcontext: whether "today" is inside a quotation
Said: presence of "said" in the same sentence
Will: presence of "will" in the same sentence
Even: presence of "even" in the same sentence
Most: presence of "most" in the same sentence
Some: presence of "some" in the same sentence
Year: presence of "year" in the same sentence
CCYY: presence of a four-digit year in the same sentence
DOW: presence of a day of the week expression ("Monday" thru "Sunday") in the same sentence
FW: "today" is the first word of the sentence
POS1: part-of-speech of the word before "today"
POS2: part-of-speech of the word after "today"
Label: specific or non-specific (class label)
Table 3 shows the performance of different classifiers in classifying occurrences of "today" as generic versus specific. The results are for 377 training vectors and 191 test vectors, measured in terms of Predictive Accuracy (percentage of test vectors correctly classified). We incorporated some of the rules learnt by C4.5 Rules (the only classifier which directly output rules) into the current version of the program.
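A feature extractor for this problem might look like the following sketch. It is our own illustration: Qcontext, POS1, and POS2 are omitted because they depend on the surrounding markup and part-of-speech tagger, and the possessive test assumes the tokenizer splits "'s" into its own token.

```python
DAYS = {"monday", "tuesday", "wednesday", "thursday",
        "friday", "saturday", "sunday"}

def today_features(tokens, i):
    """Boolean features for an occurrence of "today" at position i in a
    tokenized sentence (Qcontext, POS1, and POS2 omitted for brevity)."""
    words = [w.lower() for w in tokens]
    return {
        "Poss": i + 1 < len(words) and words[i + 1] == "'s",
        "Said": "said" in words,
        "Will": "will" in words,
        "Even": "even" in words,
        "Most": "most" in words,
        "Some": "some" in words,
        "Year": "year" in words,
        "CCYY": any(w.isdigit() and len(w) == 4 for w in words),
        "DOW":  any(w in DAYS for w in words),
        "FW":   i == 0,
    }

feats = today_features(["Today", "most", "shows", "are", "reruns"], 0)
```

Vectors like `feats`, paired with a specific/non-specific label, are the kind of input the decision-tree learners in Table 3 would train on.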
These rules included classifying "today" as generic based on (1) feature Most being true (74.1% accuracy) or (2) feature FW being true and Poss, Some and Most being false (67.4% accuracy). The granularity hypothesis was partly borne out in that C4.5 rules also discovered that the mention of a day of the week (e.g. "Monday") anywhere in the sentence predicted specific use (73.3% accuracy).
5 Towards Chronology Extraction
Event Ordering
Our work in this area is highly preliminary. To extract temporal relations between events, we have developed an event-ordering component, following (Song and Cohen 1991). We encode the tense associated with each verb using their modified Reichenbachian (Reichenbach 1947) representation based on the tuple <si, lge, ri, lge, ei>. Here si is an index for the speech time, ri for the reference time, and ei for the event time, with lge being the temporal relations precedes, follows, or coincides. With each successive event, the temporal focus is either maintained or shifted, and a temporal ordering relation between the event and the focus is asserted, using heuristics defining coherent tense sequences; see (Song and Cohen 1991) for more details. Note that the tagged TIME expressions aren't used in determining these inter-event temporal relations, so this event-ordering component could be used to order events which don't have time VALs.
Event Time Alignment
In addition, we have also investigated the alignment of events on a calendric line, using the tagged TIME expressions. The processing, applied to documents tagged by the time tagger, is in two stages. In the first stage, for each sentence, each "taggable verb occurrence" lacking a time expression is given the VAL of the immediately previous time expression in the sentence. Taggable verb occurrences are all verb occurrences except auxiliaries, modals and verbs following "to", "not", or specific modal verbs.
In turn, when a time expression is found, the immediately previous verb lacking a time expression is given that expression's VAL as its TIME. In the second stage, each taggable verb in a sentence lacking a time expression is given the TIME of the immediately previous verb in the sentence which has one, under the default assumption that the temporal focus is maintained. Of course, rather than blindly propagating time expressions to events based on proximity, we should try to represent relationships expressed by temporal coordinators like "when", "since", "before", as well as explicitly temporally anchored events, like "ate at 3 pm". The event-aligner component uses a very simple method, intended to serve as a baseline method, and to gain an understanding of the issues involved. In the future, we expect to advance to event-alignment algorithms which rely on a syntactic analysis, which will be compared against this baseline.
Assessment
An example of the chronological tagging of events offered by these two components is shown in Figure 2, along with the TIMEX tags extracted by the time tagger. Here each taggable verb is given an event index, with the precedes attribute indicating one or more event indices which it precedes temporally. (Attributes irrelevant to the example aren't shown.) The information of the sort shown in Figure 2 can be used to sort and cluster events temporally, allowing for various time-line based presentations of this information in response to specific queries. The event-orderer has not yet been evaluated. Our evaluation of the event-aligner checks the TIME of all correctly recognized verbs (i.e., verbs recognized correctly by the part-of-speech tagger). The basic criterion for event TIME annotation is that if the time of the event is obvious, it is to be tagged as the TIME for that verb. (This criterion excludes interval specifications for events, as well as event references involving generics, counterfactuals, etc.
However, the judgements are still delicate in certain cases.) We score Correctness as the number of correct TIME fills for correctly recognized verbs over the total number of correctly recognized verbs. Our total correctness score on a small sample of 8505 words of text is 394 correct event times out of 663 correct verb tags, giving a correctness score of 59.4%. Over half the errors were due to propagation of an incorrect event time to neighboring events; about 15% of the errors were due to event times preceding the initial TIMEX expression (here the initial reference time should have been used); and at least 10% of the errors were due to explicitly marked tense switches. This is a very small sample, so the results are meant to be illustrative of the scope and limitations of this baseline event-aligning technique rather than present a definitive result.
6 Related Work
The most relevant prior work is (Wiebe et al. 98), who dealt with meeting scheduling dialogs (see also (Alexandersson et al. 97), (Busemann et al. 97)), where the goal is to schedule a time for the meeting. The temporal references in meeting scheduling are somewhat more constrained than in news, where (e.g., in a historical news piece on toxic dumping) dates and times may be relatively unconstrained. In addition, their model requires the maintenance of a focus stack. They obtained roughly .91 Precision and .80 Recall on one test set, and .87 Precision and .68 Recall on another. However, they adjust the reference time during processing, which is something that we have not yet addressed. More recently, (Setzer and Gaizauskas 2000) have independently developed an annotation scheme which represents both time values and more fine-grained inter-event and event-time temporal relations. Although our work is much more limited in scope, and doesn't exploit the internal structure of events, their annotation scheme may be leveraged in evaluating aspects of our work.
The MUC-7 task (MUC-7 98) did not require VALs, but did test TIMEX recognition accuracy. Our 98 F-measure on NYT can be compared, for just TIMEX, with MUC-7 (MUC-7 1998) results on similar news stories, where the best performance was .99 Precision and .88 Recall. (The MUC task required recognizing a wider variety of TIMEXs, including event-dependent ones. However, at least 30% of the dates and times in the MUC test were fixed-format ones occurring in document headers, trailers, and copyright notices.) Finally, there is a large body of work, e.g., (Moens and Steedman 1988), (Passoneau 1988), (Webber 1988), (Hwang 1992), (Song and Cohen 1991), that has focused on a computational analysis of tense and aspect. While the work on event chronologies is based on some of the notions developed in that body of work, we hope to further exploit insights from previous work.
Conclusion
We have developed a temporal annotation specification, and an algorithm for resolving a class of time expressions found in news. The algorithm, which is relatively knowledge-poor, uses a mix of hand-crafted and machine-learnt rules and obtains reasonable results. In the future, we expect to improve the integration of various modules, including tracking the temporal focus in the time resolver, and interaction between the event-orderer and the event-aligner. We also hope to handle a wider class of time expressions, as well as further improve our extraction and evaluation of event chronologies. In the long run, this could include representing event-time and inter-event relations expressed by temporal coordinators, explicitly temporally anchored events, and nominalizations.
Figure 1.
Time Tagger Source articles number of words Type Human Found (Correct) System Found System Correct Precision Recall Fmeasure NYT 22 35,555 TIMEX 302 302 296 98.0 98.0 98.0 Values 302 302 249 (129) 82.5 (42.7) 82.5 (42.7) 82.5 (42.7) Broadcast 199 42,616 TIMEX 426 417 400 95.9 93.9 94.9 Values 426 417 353 (105) 84.7 (25.1) 82.9 (24.6) 83.8 (24.8) Overall 221 78,171 TIMEX 728 719 696 96.8 95.6 96.2 Values 728 719 602 (234) 83.7 (32.5) 82.7 (32.1) 83.2 (32.3) Table 1. Performance of Time Tagging Algorithm Print Broadcast Total Missing Vals 10 29 39 Extra Vals 18 7 25 Wrong Vals 19 11 30 Missing TIMEX 6 15 21 Extra TIMEX 2 5 7 Bad TIMEX extent 4 12 16 TOTAL 59 79 138 Table 2. High Level Analysis of Errors Driver Resolve Self-contained Identify Expressions Discourse Processor Context Tracker Algorithm Predictive Accuracy MC4 Decision Tree3 79.8 C4.5 Rules 69.8 Naïve Bayes 69.6 Majority Class (specific) 66.5 Table 3. Performance of “Today” Classifiers In the last step after years of preparation, the countries <lex eindex=“9” precedes=“10|” TIME=“19981231”>locked</lex> in the exchange rates of their individual currencies to the euro, thereby <lex eindex=“10” TIME=“19981231”>setting</lex> the value at which the euro will begin <lex eindex=“11” TIME=“19990104”>trading</lex> when financial markets open around the world on <TIMEX VAL=“19990104”>Monday</TIMEX>……. Figure 2. Chronological Tagging 3 Algorithm from the MLC++ package (Kohavi and Sommerfield 1996). References J. Alexandersson, N. Riethinger, and E. Maier. Insights into the Dialogue Processing of VERBMOBIL. Proceedings of the Fifth Conference on Applied Natural Language Processing, 1997, 33-40. J. F. Allen. Maintaining Knowledge About Temporal Intervals. Communications of the ACM, Volume 26, Number 11, 1983. M. Bennett and B. H. Partee. Towards the Logic of Tense and Aspect in English, Indiana University Linguistics Club, 1972. S. Busemann, T. Decleck, A. K. Diagne, L. Dini, J. Klein, and S. Schmeier. 
Natural Language Dialogue Service for Appointment Scheduling Agents. Proceedings of the Fifth Conference on Applied Natural Language Processing, 1997, 25-32.
D. Dowty. Word Meaning and Montague Grammar. D. Reidel, Boston, 1979.
C. H. Hwang. A Logical Approach to Narrative Understanding. Ph.D. Dissertation, Department of Computer Science, U. of Alberta, 1992.
ISO-8601. ftp://ftp.qsl.net/pub/g1smd/8601v03.pdf, 1997.
R. Kohavi and D. Sommerfield. MLC++: Machine Learning Library in C++. http://www.sgi.com/Technology/mlc, 1996.
KSL-Time. http://www.ksl.Stanford.EDU/ontologies/time/, 1999.
M. Moens and M. Steedman. Temporal Ontology and Temporal Reference. Computational Linguistics, 14, 2, 1988, pp. 15-28.
MUC-7. Proceedings of the Seventh Message Understanding Conference, DARPA, 1998.
R. J. Passonneau. A Computational Model of the Semantics of Tense and Aspect. Computational Linguistics, 14, 2, 1988, pp. 44-60.
H. Reichenbach. Elements of Symbolic Logic. London, Macmillan, 1947.
A. Setzer and R. Gaizauskas. Annotating Events and Temporal Information in Newswire Texts. Proceedings of the Second International Conference on Language Resources and Evaluation (LREC-2000), Athens, Greece, 31 May - 2 June 2000.
F. Song and R. Cohen. Tense Interpretation in the Context of Narrative. Proceedings of the Ninth National Conference on Artificial Intelligence (AAAI'91), pp. 131-136, 1991.
TDT2. http://morph.ldc.upenn.edu/Catalog/LDC99T37.html, 1999.
B. Webber. Tense as Discourse Anaphor. Computational Linguistics, 14, 2, 1988, pp. 61-73.
J. M. Wiebe, T. P. O'Hara, T. Ohrstrom-Sandgren, and K. J. McKeever. An Empirical Approach to Temporal Reference Resolution. Journal of Artificial Intelligence Research, 9, 1998, pp. 247-293.
G. Wilson, I. Mani, B. Sundheim, and L. Ferro. Some Conventions for Temporal Annotation of Text. Technical Note (in preparation). The MITRE Corporation, 2000.
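The chronological-tagging markup shown in Figure 2 above is straightforward to post-process. As a minimal sketch (not the authors' code; it assumes straight double-quoted attributes as rendered here), the event times and TIMEX values can be pulled out with regular expressions:

```python
import re

# Extract <lex ... TIME="...">word</lex> events and <TIMEX VAL="...">...</TIMEX>
# expressions from chronologically tagged text (Figure 2 style markup).
LEX_RE = re.compile(r'<lex\s+([^>]*)>(.*?)</lex>', re.DOTALL)
TIMEX_RE = re.compile(r'<TIMEX\s+VAL="([^"]*)"[^>]*>(.*?)</TIMEX>', re.DOTALL)
ATTR_RE = re.compile(r'(\w+)="([^"]*)"')

def extract_chronology(text):
    """Return (events, timexes): events as (word, TIME) pairs,
    timexes as (string, VAL) pairs."""
    events = []
    for attrs, word in LEX_RE.findall(text):
        d = dict(ATTR_RE.findall(attrs))
        events.append((word, d.get("TIME")))
    timexes = [(word, val) for val, word in TIMEX_RE.findall(text)]
    return events, timexes

sample = ('the countries <lex eindex="9" TIME="19981231">locked</lex> in the '
          'exchange rates ... on <TIMEX VAL="19990104">Monday</TIMEX>')
print(extract_chronology(sample))
# -> ([('locked', '19981231')], [('Monday', '19990104')])
```

A consumer of the annotation could sort the extracted events by their TIME values to recover the kind of event chronology the paper describes.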
´¶«¯ÎX¥¶®g©Á´W­X¥§¦H»7¬™¥±°R²R»7¥¶¼ª· É « ¬™«"®¯¼ªºŒ¨R²R»7«g­WÂ˜É ©’¦†»™©ª¹ª¹ª«h­ ©Á¹’©ª¥±·3ÂX¿O¥¶« ´§­X¥¶·R¹ž»™¤R« »™©ª¹ª¹’«g­Ô®¯¼ª¬™¨Q¼ª¬ ©/±²  ©ª·5­³/´µ'¶L©Á·5­½»™¤R«©ª®g®¯²R¬ ©ª® ¿ ¼ÁÀ*»7¤5«0»™©Á¹’¹ª¥¶·R¹Œ®¯¼ªºŒ¨R²R»7«g­WÅ #-žž ·+¤X£4¢0§äå~¢ò¥ç©/!~¢0§ ÚÛ·Ò»7¤R«µ°<©ª¦7« ´¶¥±·R«º¼X­X«g´c©ª·Ò©’¦7¦7²RºŒ¨X»™¥±¼’·L¥¶¦ºž©’­X« »7¤<©¾»‰©Á´¶´R«g´±«gº«g·O»™¦'¼ÁÀ¯È¦H¤<©Á¬™«Ÿ»™¤R«¦™©ÁºŒ«m²R·R¥±À”¼ª¬™ºA´¶«¯Î¸ ¥§® ©Á´>­X¥§¦H»7¬™¥±°R²R»7¥¶¼ª·WÅz£m¤R¥§¦†¥¶¦m©¦u» ©Á·5­5©Á¬ ­Œ»7«h® ¤R·R¥§ø’²5«À”¼’¬ ¤5©ª·5­X´¶¥±·R¹Œëë.ÿ É ¼ª¬ ­R¦¥¶·µê†ë0ì» ©Á¹’¹ª¥¶·R¹µÝ”í« ¥§¦7® ¤5«g­X«g´ «¯»Œ©Á´oűÂ!41 1’ú’ß^ŏíG¤R« ·L»™¤R«½»™©Á¹’¹ª«g¬"¥¶¦À‘©ª® «g­ É ¥±»7¤Õ©Á· ëëÿ É ¼’¬™­ ¥¶·Ï©½¦H«g·O»7« ·<®¯«ªÂQ¥±»™¦}ºŒ¼’¦H»¨R¬™¼ª°5©ª°R´¶«ê†ë0ì »™©ª¹¥§¦3® ¤R¼O¦H«g· É ¥±»7¤¬7«h¦H¨Q«g®¯»W»7¼.»™¤R«Ÿ¦7¼ª´¶«'»7¬™¥¶¹ª¬ ©Áº9¨5¬7¼’°X¸ ©Á°5¥±´¶¥Ä»u¿’Å&£m¤R«©ª® ® ²R¬ ©ª®¯¿9¼ª·¸/±«²  ¬7«h©ª® ¤R«h­Nüýû úOþ ºŒ«g©ª·R¥±·5¹Æ»™¤5©¾»[üOýXû ú Oþ`¼ªÀ»7¤5«¼X®g®¯²R¬™¬7«g·5®¯«h¦¼ªÀ»™¤R« É ¼ª¬ ­R¦¼ªÀ¯ É «g¬7«Ï¹ª¥¶³ª« ·Ç»7¤R«Ò¬™¥±¹’¤’»Ô»™©ª¹5Å £m¤R¥§¦Ô¬7« ¸ ¦7²R´Ä»¥§¦®¯¼’·5¦H¥§¦H»7« ·O»žÝ‘©ª´Ä»™¤R¼ª²R¹’¤Ï©½°R¥±»"¤R¥±¹’¤R« ¬hÂW­X²R«»™¤R« ­X¥±ÃQ«g¬7«g·5®¯«¼ÁÀ»™¤R«"» ©Á¹’¦7«¯» ¦T¦7¥K.g«hß É ¥Ä»™¤˜»7¤R««gøO²R¥¶³¾©Á´¶« ·O» «¯ÎX¨Q« ¬™¥±ºŒ«g·’» ¦m¥±·LݑíÏ«g¥¶¦™® ¤R«g­R« ´W« »T©Á´oűÂ3!41 1ªúOß^Å #-žÞÝ ß'*ŒŸ ¥ç©/!~¢§ ÚÛ·¢»7¤R¥§¦µ« ÎX¨<«g¬7¥¶ºŒ« ·O»g†»7¤5«´¶«¯ÎX¥¶®g©Á´T­R¥¶¦H»7¬™¥±°5²X»7¥¶¼ª·¢¼ÁÀ »™¤R«G« ´¶« ºŒ« ·O» ¦Ò¼ÁÀ}¯ ©Á¬™«¢«g¦H»7¥¶ºž©¾»7«h­ À”¼’´±´¶¼ É ¥¶·R¹9»™¤R« ¨R¬™¼X®¯«h¦7¦­X«g¦™®¯¬™¥±°Q«g­Ñ¥±·}R²5¦7¥±·R¹®¯¼’¬7¨R²<¦^"ŏ£m¤5«¬™«¯¸ ¦7²R´±»™¦}¼ª°X» ©Á¥¶·R«g­¦u»™¬7¼’·R¹ª´¶¿½¬™« ´¶¿ ¼ª·˜»™¤R«Œ­X¥¶¦™®¯¬™¥¶º¥¶·5©ª·5®¯« »™¤R¬7«h¦H¤5¼ª´§­•Ž  ® ¤R¼’¦7« ·3ʼnR¼’¬}©¹ª¥¶³ª«g·œŽ  Â>¼ª·R´¶¿˜©¦7²R°X¸ ¦7«¯»‘¯^¹*¼ÁÀ,¯ù¬™«g® « ¥¶³ª«g­ ·5« É ´¶«¯ÎX¥¶®g©Á´*­X¥¶¦H»7¬™¥¶°R²X»7¥¶¼ª·3Â<»™¤R« ¼ª»7¤R«g¬‰¥±»7« ºž¦Ÿ¬7«gºž©Á¥¶·R«g­ É ¥±»7¤µ»7¤R«g¥±¬†²R·R¥±À”¼ª¬™ºù­X¥¶¦H»7¬™¥¶°R²X¸ »™¥±¼’·5¦ ŽëTÀ.®¯¼’²R¬ ¦H«’ÂW¥ÄÀ Ž  ¥¶¦¦7«¯»»™¼WRÂW»™¤R« ·;¯^¹Ó=º¯žÅ £*©Á¹’¹ª¥¶·R¹˜©ª® ® ²R¬ ©ª®¯¿¼ÁÀ´µ'¶ À”¼ª¬­X¥ÄÃp« ¬™« ·O»³¾©ª´±²R«h¦¼ÁÀ Ž/†©ª¬7«¬™« ¨Q¼ª¬7»7«h­0¥±·» 
©Á°R´¶«9! É ¤R¥¶® ¤©Á´§¦H¼¦7¤R¼ É ¦>»™¤R«‰¬™« ´±¸ ©Á»7¥¶³ª«ž¹’©ª¥±·Ò®¯¼’ºŒ¨5©Á¬™«g­»7¼ »™¤R«ž°5©ª¦7« ´¶¥¶·R«žºŒ¼­R« ´oÅ£m¤R« °Q«g¦H»¬7«h¦H²5´Ä»"ÝoýÁúOþ߆¥§¦m¼ª°X» ©Á¥¶·R«g­ É ¥±»7¤»Ž/¦7«¯».»™¼[5û <Å ¼f½ … € … … € ƒ …b€ ¾ …b€ ¿ …b€ À Á ¤ÃbÄ„Å Æ Á{€ Ç Æ Á{€ Ç Æ Ç € … Æ ƒ?€ Ç Á Æ …b€ À ¿ Æ € È É-ÊbËbÌÍ Èb€ Î Èb€ Î Àb€ È Æ € È È?€ ƒ … € Ç £*©Á°R´¶«[! 5 ‘ ® ®¯²5¬™©’®¯¿ž¼ÁÀ/´Vµo¶´±¥¶ºŒ¥Ä»™«g­½»7¼µ¦7«¯» ¯ £m¤R«"­X¬™¼ª¨ ¼ÁÀ©ª®g®¯²R¬ ©ª® ¿ž¼ª°5¦7« ¬™³ª«h­µÀ”¼’¬³¾©Á´¶²R«g¦¼ÁÀŽ  ¤R¥¶¹ª¤5« ¬>»™¤5©Á·¡5û m®g©Á·0°<«‰«¯ÎX¨R´§©Á¥¶·R«g­0°¿»™¤R«‰­X«h®¯¬™«g©’¦H«'¼ÁÀ »™¤R«¦H¥". «0¼ÁÀ"¯^¹ É ¤R« ·°Ž  ¥¶·5®¯¬™«g©’¦H«h¦ ÂX©’¦†¦7¤R¼ É ·½°O¿Œ»™¤R« Ï Ð'ÑÏ Ï ÐEÏ ®¯²5¬7³’«¥±·µÊ5¹ª²R¬™«Tú5Å(*¥¶¹ª²R¬™«Tú"­R¥¶¦7¨R´§© ¿¦Ÿ©Á´§¦H¼©ª® ® ²X¸ ¬ ©ª® ¿¼ÁÀ±² m©ª·5­r´µ'¶ ´¶¥±ºŒ¥±»7«g­ž»7¼»™¤R«¥±»7« ºž¦Ÿ¼ÁÀ/¯^¹cÅ £m¤R«h¦H«Ÿ» É ¼T®¯²R¬™³ª«h¦k©ª´±´¶¼ É »7¼0®¯¼’º¨<©Á¬™«zºŒ¼ª¬™«zÊ5·R«g´±¿0»™¤R« ¨Q« ¬7À”¼ª¬™ºž©Á·5® «g¦.¼ªÀ»7¤R«» É ¼ž» ©Á¹’¹ª¥¶·R¹Œ»7«h® ¤R·R¥§ø’²5«g¦ݑ²R·R¥±¸ À”¼’¬7º ­X¥§¦H»7¬™¥±°R²R»7¥¶¼ª·}ÒjÓ<ç*ì  £Ñ«h¦u»™¥±ºž©Á»7«g­­X¥§¦u»™¬7¥¶°R²X»™¥±¼’·<ß ¦7¥±·<®¯«ž»™¤R«©ª®g®¯²R¬ ©ª® ¿[¥§¦"®¯¼’º¨5²X»7«h­Ò¼ª·R´¶¿¼’·L¥Ä»™« ºž¦¼ÁÀ ¯^¹ É ¤R¥§® ¤WÂ.°¿Õ­R«¯Ê5·R¥±»7¥¶¼ª·9¼ÁÀ^¯^¹c©Á¬™«˜»™©ª¹ª¹ª«h­Õ²5¦H¥¶·R¹ ­X¥±Ãp« ¬™« ·O»T­X¥§¦u»™¬7¥¶°R²X»™¥±¼’·5¦m¥±·³/±²  ©Á·<­¥¶·³´Vµ'¶Å £m¤R«m¦H¤5©ª¨<«h¦*¼ÁÀR»™¤R«g¦7«‰» É ¼0®¯²R¬™³ª«h¦3® ©ª´±´OÀ”¼’¬k» É ¼® ¼ªº¸ ºŒ« ·O» ¦ Å£m¤R«T® ²R¬™³ª«/´Vµo¶zݛ¯^¹ ß'¥¶¦Ÿ©Á´ É © ¿X¦'¤R¥¶¹ª¤5« ¬»™¤5©Á· /±²  ݛ¯^¹ ß^ºŒ«g©Á·5¥±·R¹9»7¤5©Á»Òì  £ «g¦H»7¥¶ºŒ©Á»7«h­Í´±« ÎX¥¶®g©Á´ ­X¥§¦H»7¬™¥±°R²R»7¥¶¼ª·5¦0©Á¬™«©ª´ É © ¿X¦°Q«¯»7»7« ¬}»™¤5©Á·Ï²R·R¥±À”¼ª¬™º ­X¥¶¦H¸ »™¬7¥¶°R²X»™¥±¼’·5¦ Å*£m¤R«‰© ³’« ¬ ©Á¹ª«'©Á°<¦H¼’´±²X»™«z¹’©ª¥±·0¼ªÀx/´Vµo¶‰Ý¯^¹ ß ¼¾³’« ¬“/±²  ݯ^¹§ß"¬™«g©’® ¤R«g¦Œ1OþÅL£m¤R« ®¯²R¬™³ª« ±² j ݯ^¹ ß ¦7¤R¼ É ¦Œ©Á·¥±ºŒ¨R¬™¼¾³ª«gº«g·O»¼ªÀ}©ª®g®¯²R¬ ©ª® ¿GÝ À”¬™¼ªº üýû úOþ ²R¨»™¼—V VOþß3À”¼ª¬*»7¤5«.¥Ä»™« ºž¦'¼ÁÀ¯^¹oÅ*£m¤R¥§¦*À‘©ª®¯»z¥±·<­X¥¶®g©¾»™«g¦ »™¤5©¾» É ¤R«g·ž©}® ¼ª·O»7« Ν»z¤5©ª¦'°Q« «g· \ ²<­X¹ª«h­­R¥¶¦™®¯¬™¥±ºŒ¥¶·5©Á·O» ©’® ®¯¼’¬™­R¥±·R¹}»™¼0»7¤R«Tì  
£0’¥Ä»z¥§¦z©ª´¶¦7¼0º¼’¬7«­R¥¶¦™®¯¬™¥±ºŒ¥¶·5©Á·O» À”¼’¬m»7¤R«0»™¬7¥¶¹ª¬ ©ÁºùºŒ¼X­X« ´k©Á´¶¼ª·R«’Å Ô Õ 13F M =X132 š؉-RU íG¥±»7¤R¥¶·»7¤R«½À”¬™©ªºŒ« É ¼ª¬™ËϼªÀ‰{T©ªº«h­ý*‰·’»™¥Ä»u¿|*zÎO»™¬™©’®^¸ »™¥±¼’·WŸ¨Q¼ª¨5²R´¶©ª¬7¥". «h­Ñ°¿L»™¤R«Wy»Ö  ®¯¼’·XÀ”« ¬™« ·<®¯«g¦g‰¦7« ³O¸ «g¬™©ª´QºŒ« »7¤R¼X­R¦m¤5© ³’«°Q« «g· ¨R¬7¼’¨<¼O¦H«h­Œ»7¼Œ¤5©ª·5­X´¶«ëë.ÿ ¨R¬™¼ª¨Q« ¬*·5©ªº«h¦ Å*ÚÛ·ºŒ¼O¦u»'¼ÁÀR»™¤R«g¦7« É ¼’¬7ËX¦W¥±»¥¶¦'­X¥×µ®¯²R´±» »™¼µ­X¥¶¦H»7¥¶·R¹’²R¥¶¦7¤ »7¤5«"¨R¬™¼® «g¦™¦m¼ÁÀ'²R·R˝·R¼ É · É ¼ª¬ ­R¦†»™©ª¹Á¸ ¹’¥±·R¹½À”¬™¼ªº%»7¤5«¨5¬7¼X®¯«h¦7¦}¼ªÀ‰·5©ÁºŒ«h­«g·’»™¥Ä»u¿[­R«¯»7«h®^»™¥±¼’·WÅ £m¤R«h¦H«mºŒ«¯»™¤R¼X­R¦z®g©Á·°<«® ´¶©’¦7¦7¥ÄÊ<«g­©ª¦*À”¼ª´¶´¶¼ É ¦5*¤5©ª·5­¸ ® ¼­R«g­0¬™²R´±«h¦‰Ý É ¤R¥¶® ¤® ©Á·"©ª´¶¦7¼.°Q«Ÿ©ª¦™¦7¼® ¥¶©Á»7«h­ É ¥±»7¤¦u» ©¾¸ »™¥¶¦H»7¥§® ©ª´zºŒ«¯»7¤5¼­5¦™ß¯Â*® ¼Á¸ï¼®g®¯²R¬™¬™« ·5® «g¦°<« » É «g« · É ¼’¬™­5¦  y[©ÁÎX¥±º²Rº¸*‰·’»™¬7¼’¨¿ªÂT«h®¯¥§¦H¥¶¼ª· £k¬7«g«g¦L©Á·<­Ø0T¥¶­5­X« · y[©ª¬7˒¼¾³Œº¼X­X«g´¶¦gÅ 20 30 40 50 60 70 80 90 100 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 score Ù discriminance threshold Tbase TSCT |E’|/|E| *¥±¹’²R¬™«ú†5 ‘ ® ®¯²5¬™©’®¯¿ ¼ÁÀS±² j"©Á·<­•´Vµo¶ ¬7«h¦u»™¬7¥§®^»™«g­ »7¼n¯^¹ ‘ ¦7«¯».¼ªÀ3¨R¬™«g­X« Ê5·R«g­À”«g©Á»7²R¬™«g¦†¥§¦m²5¦7«g­µ°¿©ª´±´Q»™¤R«g¦7« ºŒ«¯»™¤R¼X­R¦Õ¥±· ¼’¬™­R« ¬»7¼® ´¶©’¦7¦7¥±À”¿s¨5¬7¼’¨<«g¬Æ·5©ªº«h¦Æ¼’¬ ·5©ªº«h­µ«g·’»™¥Ä»u¿’Åz£m¤R«g¦7«À”«g©¾»™²R¬™«g¦m® ©ª·°Q«0©´¶¥¶¦H».¼ÁÀ3˒« ¿O¸ É ¼ª¬ ­R¦ Âp©½¨<¼O¦H¥±»7¥¶¼ª· É ¥±»7¤5¥±·Ï»7¤R«·R¼’²R·[¨R¤R¬ ©ª¦7«¥¶·5® ´±²5­X¸ ¥¶·R¹»™¤R«‰¨R¬™¼ª¨Q« ¬k·5©ÁºŒ«Ÿ¼ª¬k¦7¼ªºŒ«‰»u¿O¨Q¼ª¹’¬™©ª¨R¤R¥§®'¥±·RÀ”¼ª¬™ºŒ©Á¸ »7¥¶¼ª·<¦T©ª¦® ©ª¨R¥Ä» ©Á´¶¥K.h©¾»™¥±¼’·˜¼ª¬.À”¼’¬7ºž©¾»}¼ÁÀ·²RºŒ« ¬™¥¶®g©Á´k«¯Î¸ ¨R¬™«g¦™¦H¥¶¼ª·<¦ Åz£m¤R«h¦H«0À”«h©¾»™²R¬7«h¦.©Á¬™«²5¦7«g­Ô« ¥±»7¤R«g¬.»7¼ž°5²R¥±´§­ ¤5©ª·5­¸Û®¯¼X­X«g­Õ¬™²R´¶«g¦ž¼’¬µ©ª¦ž¨5©ª¬™©ªºŒ«¯»7«g¬™¦Œ¥¶·Ç¦H»™©¾»™¥¶¦H»7¥§® ©ª´ ©Á¨5¨R¬7¼O©ª® ¤R«h¦ Å  ¼ªºŒ¨5©Á¬™«g­»™¼©Á´¶´Á»™¤R«z¼ª»7¤R«g¬3ºŒ«¯»™¤R¼X­R¦3´¶¥§¦u»™«g­"©ª°<¼¾³’«ªÂ ¼ª²5¬0©ª¨R¨R¬™¼’©’® ¤¤5©’¦0¼ª·R«ºž©Á¥¶·¼’¬7¥¶¹ª¥¶·5©ª´±¥±»u¿5 É «Œ­R¼ª·-¯ » 
²5¦7«Ÿ©¦7«¯»*¼ÁÀR¨R¬™«g­R«¯Ê5·R«h­0À”«g©Á»7²R¬™«g¦*­X²R¬™¥¶·R¹¼ª²5¬3´¶«g©Á¬™·R¥¶·R¹ ¨R¬™¼X®¯«g¦™¦gÅ"ÚÛ·[À‘©’®^»gÂk©Á·¿˜¥Ä»™« º ¼ªÀŸ¼’²R¬T»™¬™©ª¥±·5¥±·R¹ ®¯¼’¬7¨5²5¦ Ý É ¼ª¬ ­R¦©Á·5­˜êmë0ìRß®g©Á·˜°Q«²5¦7«g­ »™¼ºŒ¼X­X« ´'©ž¨5©ª¬H»™¥¶®¯¸ ²R´§©Á¬0®¯¼’·’»™«¯Î»gÅ}Úﻥ§¦»7¤R«Œ¦7¨R´±¥±»}® ¬7¥±»7«g¬7¥§©RÂQ²5¦7«g­Ï©¾»«h©ª® ¤ ·R¼X­X«Œ¼ÁÀ‰»7¤R«»7¬™« « É ¤R¥§® ¤Ï® ¤5¼O¼O¦H«h¦0©Á·¥±»7«gº Ý É ¼ª¬ ­[¼’¬ ê†ë0ì5ßk©ª·5­©T¨Q¼’¦7¥Ä»™¥±¼’· É ¥±»7¤5¥±·»7¤5«†¬™« ¹’²R´§©Á¬'«¯ÎX¨R¬™«g¦™¦H¥¶¼ª· »7¼Œºž©Á˒«0²R¨˜©øO²R«h¦u»™¥±¼’·WÅ Úï»Ô¥§¦¥¶ºŒ¨<¼’¬H» ©Á·O»»™¼Æ·R¼ª»7«Ï»7¤5©Á»½¼’²R¬½ºŒ« »7¤R¼X­9®g©Á· «g©’¦H¥¶´¶¿Ô°Q«Œ®¯¼ªº°R¥¶·R«g­ É ¥±»7¤Ï¼Á»™¤R« ¬¦H»™©¾»™¥¶¦H»7¥§® ©ª´kºŒ« »7¤R¼X­ ­X«g³ª« ´¶¼ª¨5¨<«h­À”¼’¬'»7¤R«.·<©ÁºŒ«.« ·O»7¥±»u¿® ´¶©’¦7¦7¥±Ê<® ©Á»7¥¶¼ª·»™©ª¦7ËpÅ R¼’¬k« ÎR©ÁºŒ¨R´¶«ªÂ¾¥¶·»7¤R«m¦7¿¦H»7«gºÛÚ å’ð¯ã<äcæ܉æ ãQå’ð¯è ¶Ý ÝDŸ¥¶Ëª«g´ «¯»È©Á´oűÂü!41 1 1Oß^©Á´¶´[»7¤5«A²R·R˝·R¼ É ·¨5¬7¼’¨<«g¬9·5©ªº«h¦ ©Á¬™«¹ª¬™¼ª²R¨Q«g­9¥±·È©Õ²R·R¥§ø’²5«Ò®g©¾»™« ¹ª¼’¬7¿¢´§©Á°Q« ´¶´±«h­wkel-p É ¤R«g¬7«h©ª¦Ó¼ª²R¬ÍºŒ« »7¤R¼X­ ¦H¨5¬7«h©ª­R¦N»7¤5« º ¼¾³’« ¬NÊ5·R«g¬ ®¯´§©ª¦™¦7«g¦gÅ*£m¤R«0´¶«¯ÎX¥§®¯¼ª·½¨R¬™¼­R²5®¯«h­µ°¿ž¼ª²R¬mºŒ«¯»™¤R¼X­½®g©Á· ­X¥¶¬7«h®^»™´±¿ °Q«²5¦7«g­˜¥¶·[»™¤R«Œy©Á¬™Ëª¼¾³ºŒ¼X­X«g´*¼ªÀ»7¤R«¦H¿X¦H¸ »7«gº#¥¶·N¼’¬™­R« ¬Ô»7¼¢¥¶º¨5¬7¼¾³’«»™¤R«L¨R¬™«g®¯¥§¦7¥±¼’·ÇÀ”¼ª¬[¦7²5® ¤ É ¼ª¬ ­R¦ Å ‘ ® ¼ªºŒ¨5©ª¬7¥§¦H¼’·ž¼ÁÀ3¼’²R¬†¬7«h¦H²R´±»™¦ É ¥±»7¤¼ª»7¤R«g¬†¦7¿X¦u»™« ºž¦ ¥§¦·R¼Á»Ò¦u»™¬™©ª¥±¹’¤’»7À”¼ª¬ É ©ª¬™­W°<«h® ©Á²<¦H«’©’¦[¥±» É ©ª¦¦7©ª¥¶­ °Q«¯À”¼’¬7«’Â'»7¤R«»™©Á¹’¹ª¥¶·R¹[¨R¬™«g® ¥¶¦7¥¶¼ª·L¼ÁÀ²5·R˝·R¼ É ·L¨R¬™¼ª¨Q« ¬ ·5©ªºŒ«g¦¥¶¦·R¼ª»Œ® ¼ª·5¦7¥§­X« ¬™«g­>ºŒ¼O¦u»¼ÁÀ»™¤R«Ô»7¥¶º«’‰©ª¦Œ© ¦7²R°X» ©ª¦7ËǼÁÀ»7¤R«Ò·5©ªº«h­9«g·O»7¥±»u¿9®¯´§©ª¦™¦H¥±Ê<®g©¾»7¥¶¼ª·9»™©’¦HËpÅ {T« ³ª«g¬H»™¤R« ´¶«g¦™¦g»7¤R«½À”¼ª´¶´¶¼ É ¥±·R¹¨5©ª¨<«g¬™¦¹ª¥¶³ª«Ô¦H¼’º«Ô¬™«¯¸ ¦7²R´±»™¦ É ¤R¥§® ¤ ® ©Á· °Q«"®¯¼’º¨<©Á¬™«g­ž»™¼Œ¼ª²R¬ ¦—5 ݔíÒ©ª¦™® ¤R¼ª´§­X«g¬T«¯»0©Á´oűÂ~!21 1Oýªßm® ©Á¬™¬™¥±«h­ ¼ª·[©Á·«¯ÎX¨<«g¬7¥±¸ ºŒ« ·O»m¼ª·©­X¥§¦7©ªº°R¥¶¹ª²5©Á»7¥¶¼ª·Œ» ©ª¦7Ë É ¥±»7¤»7¤R¬™« «}¦H«gºŒ©ª·X¸ »™¥¶®»™©ª¹’¦ÝDˆXŠX_-a4jl'ÂeˆXf 
c3hŠpÂj_n3߯Å"£m¤R«"»™«g¦H»0® ¼ª¬™¨R²5¦ É ©ª¦0ºž©ª­R«ž¼ÁÀ‰!hú ½¨R¬™¼ª¨Q« ¬·<©ÁºŒ«Œ»7¼’˪« ·<¦0«¯Î»7¬ ©ª®¯»7«g­ À”¬™¼ªº V V"í©ª´±´>ìO»7¬™« « »Sޒ¼ª²R¬™·5©ª´5­X¼X®¯²RºŒ«g·’» ¦ Å£m¤R«T»™©ª¹Á¸ ¹’¥±·R¹ž©’® ®¯²5¬7¬™¿ž¬7«g¨<¼’¬H»™«g­½¥¶¦V’þÅ Ý  ¼’´±´¶¥±·<¦©Á·<­¢ìX¥±·R¹’« ¬h—!21 1 1’ߨR¬™«g¦7« ·O»™¦©Òº« »7¤R¼X­ ²5¦7¥¶·R¹¼’·R´±¿ž©³ª« ¬™¿´±¥¶ºŒ¥Ä»™«g­¦7«¯»†¼ÁÀzç ð ð å.¬™²R´¶«g¦Ÿ¥±·½¼ª¬ ­X« ¬ »™¼0©ª²X»7¼’ºž©¾»7¥§® ©ª´±´¶¿0´±«h©Á¬™·W À”¬™¼ªºA©ª·²R·R´§©Á°Q« ´¶´¶«g­®¯¼ª¬™¨R²5¦g ® ¼ª·O»7« ÎO» ¦¬™« ´¶« ³¾©ª·’»»7¼­X¥¶¦™©Áº°R¥¶¹ª²5©Á»7« É ¼ª¬ ­R¦"°Q« ´¶¼ª·5¹Á¸ ¥¶·R¹Ô»7¼ »™¤R«µ¦7©ªºŒ«Œ¦7«¯»¼ªÀŸ» ©Á¹’¦©’¦»7¤R«ž¨R¬™« ³¥¶¼ª²5¦ É ¼ª¬™Ë ¨R¬™«g¦7« ·O»™«g­>Å£m¤R«°<«h¦u»0©ª®g®¯²R¬ ©ª® ¿µ¬7«g¨<¼’¬H»™«g­>ÂQ¼ª·[©ž»7«g¦H» ® ¼ª¬™¨R²5¦¼ªÀ›!2  ž®¯¼’·O»7«¯Î» ¦¨R¥§® Ëª«h­[©¾»T¬ ©Á·<­X¼ªº¼ª·˜»™¤R« »™¬™©ª¥±·R¥¶·R¹L­R©Á»™©¢Ý{T« É z¼ª¬™ËÑ£m¥±ºŒ«g¦ž»7«¯Î»µ®¯¼’¬7¨5²5¦™ß¯Â‰¥§¦ 1†!ªû úOþ É ¥±»7¤R¼’²X».»™©ªËO¥¶·R¹Œ¥¶·’»™¼ž©ª® ® ¼ª²R·O»†»7¤R««g¬7¬™¼ª¬ ¦†­X²R« »™¼ © É ¬™¼ª·R¹ ®¯¼’·’»™«¯Î»­X«¯»™«g®^»™¥±¼’·©ª·5­˜VªúRû úOþ É ¥±»7¤L©Á´¶´ »™¤R«"®¯¼’·’»™«¯Î»™¦.­R«¯»7«h®^»™«g­>Å íÏ«°<«g´±¥¶« ³’«.»7¤<©¾»z»™¤R«¨Q« ¬7À”¼ª¬™ºŒ©ª·5®¯«.¼ªÀW¼ª²R¬‰º« »7¤R¼X­ ¥§¦øO²R¥±»7«Œ®¯¼’ºŒ¨5©Á¬ ©Á°R´¶«0»7¼»7¤R«Œ¦™®¯¼ª¬™«¹ª¥¶³ª«g·©Á°Q¼¾³ª«’Â<©’®^¸ ® ¼ª¬ ­X¥±·5¹»7¼»7¤R«À‘©ª®¯»‰»7¤<©¾»m¼ª²R¬‰»™©Á¹O¦H« »Ÿ¥§¦‰ºŒ¼’¬7«¨5¬7«h®¯¥§¦H« Ýi]$ced‰^`f6g˜©Á·5­[]0^`_-a$bÀ”¼ª¬)ˆXІ_-a2jl.0b3jmlÔ©ª·5­[hjk-lß b3_ g À”¼’¬ ˆXf c-hâŠ5ß^ÅUŸ«h¦H¥§­X«h¦ Â.¥¶·5®¯´¶²5­X¥¶·R¹¥±·9»7¤R«»7«g¦H» ® ¼ª¬™¨R²5¦Œ¼ª·5´±¿¥Ä»™« ºž¦µ¼X® ® ²R¬™¥±·R¹Ò´¶«g¦™¦Œ»7¤<©Á·ÕÀ”¼ª²R¬µ»7¥¶º«h¦ ºž©Á˒«g¦*»™¤R«.»™©’¦H˺Œ¼ª¬™«m­R¥a×µ®¯²5´Ä»Ÿ»7¤5©ª·¨5¥¶® Ë¥¶·R¹0»7¤R«g·ž©¾» ¬ ©Á·5­R¼ªº Å à á 1 -5J ‹ 1 6 = 4[â 1 J ‘ ¦¥Ä»"¥§¦"¦7¤R¼ É ·¥¶·L¦H«h®^»™¥±¼’·|RÅK!’Â>»7¤5¥¶¦ºŒ« »7¤R¼X­ É ¼’¬7ËX¦ É « ´¶´ É ¤R« ·»7¤5«Ô²5·R˝·R¼ É ·¨5¬7¼’¨<«g¬·5©ªºŒ«g¦Œ¼X® ® ²R¬gŸ©¾» ´¶«g©’¦u»¼’·5®¯«’Â5¥¶·[©Œ´±¼ É ©Áº°R¥±¹’²R¥±»u¿½®¯¼’·O»7«¯Î»¥¶· »7¤R«"»7«g¦H» ® ¼ª¬™¨R²5¦25T¥¶·Ê5¹’²R¬™«úRÂp»™¤R«»™©ª¹ª¹ª¥¶·R¹Ô©ª® ® ²R¬ ©ª®¯¿ ¬™«g©’® ¤R«g¦ 1’þ9À”¼ª¬ É ¼ª¬ ­R¦k©ª¨R¨Q«g©Á¬™¥¶·R¹©Á»*´¶«g©’¦u»'¼ª·5® «Ÿ¥¶·©}®¯¼’·O»7«¯Î» É 
¥±»7¤[©µ­R¥¶¦™®¯¬™¥±ºŒ¥¶·5©Á·<®¯«¤R¥¶¹ª¤R«g¬.»7¤<©Á·WRû VRşÚï»T¥§¦.»7¤5« ¬™«¯¸ À”¼’¬7«·<©¾»7²5¬™©ª´R»7¼»7¬™¿»™¼¼’°X»™©ª¥±·W’À”¼ª¬†©"¹’¥±³’« ·ž²R·R˝·R¼ É · ¨R¬™¼ª¨Q« ¬ž·<©ÁºŒ«ªÂ.©ª¦Œºž©ª·O¿Õ®¯¼’·’»™«¯Î»ž¼ÁÀ0¼X® ®¯²5¬7¬™« ·5® «˜©ª¦ ¨Q¼’¦™¦H¥¶°R´¶«ªÅÔ£m¤R¥¶¦¦H¤R¼’²R´§­Ï¥¶·5® ¬7«h©ª¦7«»™¤R«ž¨R¬™¼ª°5©ª°R¥¶´±¥±»u¿[¼ÁÀ Ê5·<­X¥±·5¹0­R¥¶¦™®¯¬™¥±ºŒ¥¶·5©Á·O»®¯¼’·’»™«¯Î»™¦kÀ”¼’¬'® ¤<©Á¬ ©ª®^»™« ¬™¥K.g¥±·5¹¥Ä»hÅ íÏ«†¦u» ©Á¬7»7«h­}¦H»7²<­X¿¥±·R¹©.ºŒ«¯»7¤5¼­ É ¤5« ¬™«»7¤R«‰íϼ’¬7´§­¸ íG¥§­X«¯¸ïí« °ÌÝm+m[mµßÏ¥¶¦Ï¨R¬™¼ª°Q«g­ÓÀ”¼’¬ºŒ¼ª¬™«Æ¦™©ÁºŒ¨R´¶«g¦ É ¤R«g·µ»™¤R«T»™«g¦H»m® ¼ª¬™¨R²5¦Ÿ­X¼«g¦†·R¼Á»m®¯¼’·’» ©Á¥¶·µ« ·5¼ª²R¹’¤µ¼X®^¸ ® ²R¬7¬™« ·<®¯«g¦‰¼ÁÀ3©¹ª¥¶³ª« ·²R·R˝·R¼ É ·µ¨R¬™¼ª¨Q« ¬Ÿ·5©ÁºŒ«T»7¼²5¨X¸ ­R©Á»7«¥±»™¦m´¶«¯ÎX¥§® ©Á´>«g·O»7¬™¿ªÅ £m¤R«Œºž©Á¥¶·Ï¨R¬™¼ª°5´±«gº É ¤R« ·´¶¼O¼’˝¥±·R¹½À”¼ª¬0·R« É ¦™©Áº¸ ¨R´¶«g¦Ï¼ª· »7¤R«}m+m[mÐ¥§¦[»™¤R«Õ·R¼ª¥§¦H«©ª¦™¦H¼X®¯¥§©¾»™«g­ É ¥±»7¤ »™¤R«Ô©ª·5¦ É «g¬"»7¼©ÏøO²R«g¬7¿’Å£m¤5«½©Á·<¦ É « ¬ ¦"·R«g«g­Ñ»7¼°Q« ¨R¬™¼X®¯«h¦7¦7«g­Í¥¶·A¼’¬™­R« ¬[»™¼Ó® ¼ª·5¦H»7¥±»7²R»7«Õ³ ©ª´±¥§­A¦™©ÁºŒ¨R´¶«g¦gÅ £m¤R¥§¦.¨R¬™¼X®¯«h¦7¦7¥±·5¹¥¶·³ª¼ª´¶³ª«h¦m®¯´¶«g©Á·5¥±·R¹µ©ª·5­Ê5´Ä»™« ¬™¥±·5¹µ©¾À ¸ »™« ¬Œ­R©¾» ©[¤5©’¦°Q« «g·¦7« ·O»°<©ª® Ë°O¿Ñ©Ï¦H«h©Á¬ ® ¤L« ·R¹’¥±·5«ªÅ 읲5® ¤ ¨R¬™¼X®¯«h¦7¦7¥±·5¹¥¶·³ª¼ª´¶³ª«h¦Ÿ»7¤R«0À”¼’´±´¶¼ É ¥±·5¹¦H»7«g¨5¦5 !ªÅ읫g·5­X¥¶·R¹[©˜øO²R« ¬™¿˜»™¼[©˜¦H«h©Á¬ ® ¤Ï«g·R¹ª¥¶·R«À”¼ª¬"«h©ª® ¤ ²R·5ËO·5¼ É ·G¨R¬7¼’¨<«g¬·5©ªºŒ« É « É ©ª·’»»7¼Ñ¨R¬™¼X®¯«h¦7¦gÅ £m¤R«¢» É ¼Í¨5©ª¬™©ªº« »7«g¬™¦Ò¼ªÀ½»™¤R«9øO²R« ¬™¿A©ª¬7«¢»™¤R« ¨R¬™¼ª¨Q« ¬·5©ÁºŒ«©ª¦T©˪«g¿ É ¼ª¬ ­ ©Á·5­˜»7¤R«´¶©ª·R¹ª²5©ª¹ª« ¼ªÀk»™¤R«0»7« ÎO»hÅ XÅ5¬7¼’ºA©Á´¶´X»7¤5«©Á·5¦ É «g¬™¦'¦7« ·O»z°5©’® Ë"°¿»7¤R«.«g·R¹ª¥¶·R«’ ¼’·R´±¿Ï»7« Ν»7²5©ª´†­R©Á»™©˜¥§¦˪« ¨R»½Ý›ã3b3d‰f˜Ê5´¶«g¦ ß^ÅÏ£m¤R« ã3b3d‰fµ» ©Á¹O¦©ª¬7«»™¤R« ·¬™« ºŒ¼¾³ª«h­˜©Á·5­[»7¤R«»7« ÎO»0¥§¦ »™¼ªËª«g·R¥§¦H«h­>Â*»™©Á¹’¹ª«h­L©Á·5­L¨<©Á¬ ¦H«h­Ò´±¥¶Ëª«»7¤R«»7¬ ©Á¥¶·X¸ ¥¶·R¹˜®¯¼’¬7¨R²<¦²<¦H«h­[À”¼’¬0»7¤R«ž°R²5¥±´§­X¥¶·R¹ ¼ÁÀŸ¼’²R¬}»7¬™« «’Å *‰³ª« ·O»™²5©Á´¶´±¿’Â0»7¤R«}{TꉦϮ¯¼’·’» ©Á¥¶·R¥¶·R¹¢»7¤R«Æ¨R¬7¼’¨<«g¬ ·5©ªºŒ«0©ª¬7«0˒« ¨X»hÅ úRÅ ‘ 
´±´p»™¤R«g¦7«o{ꉦm©Á¬™«T¨5¬7¼X®¯«h¦7¦7«g­ž»7¤R¬™¼ª²R¹’¤µ»™¤R«}»7¬™« « ©ª·5­ ¼ª·R´¶¿»7¤R¼O¦H«Ç«g·5­X¥¶·R¹ ²R¨ù¥¶·Ì­X¥§¦™®¯¬™¥±ºŒ¥¶·5©Á·O» ´¶«g© ³’«g¦m©Á¬™«0˪« ¨R»gÅ ‘ »Ô»™¤R«« ·5­N¼ÁÀ"»™¤R¥§¦½¨5¬7¼X®¯«h¦7¦»7¤5«g¦7«Ò¦™©ÁºŒ¨R´¶«g¦Ô©Á¬™« ©ª­5­X«g­0»™¼»™¤R«Ÿ¦™©ÁºŒ¨R´¶«g¦*©Á´¶¬7«h©ª­X¿}©Á¨R¨Q«g©ª¬7¥¶·R¹¥¶·»™¤R«‰»™«g¦H» ®¯¼’¬7¨5²5¦†»7¼Œ¬™« «g¦H»7¥¶ºž©¾»™«»™¤R«"ê†ë0ìµ»™©ª¹ª¹’« ¬mºŒ¼­R« ´oÅ íÏ«L® ©ª¬7¬™¥¶«g­¢¼ª²R» ©ÆÊ5¬ ¦u» «¯ÎX¨Q« ¬™¥±ºŒ« ·O» ¼ª·9»7¤R«Ò¦H« » ¯¨R¬7«h¦H«g·O»7«g­½¥¶·òRÅK!’Å(*Ÿ©ª® ¤½¥Ä»™« º ¼ªÀ"¯žÂR¨5¬7¼X®¯«h¦7¦7«g­°¿ »7¤5«†ºŒ« »7¤R¼X­¨R¬™« ³¥±¼’²5¦7´±¿­X«h¦7® ¬7¥¶°<«h­>ÂÁ¹’« ·R«g¬™©Á»7«g¦*¼’·© ³O¸ « ¬ ©Á¹’«†©0ú y˜¼‘ã-bd‰f»7« ÎO»‰® ¼ª¬™¨R²5¦gÅ ‘ À »7« ¬z»7¤R«® ´±«h©Á·R¥¶·R¹ ¨R¬™¼X®¯«g¦™¦g’¼’·R´±¿ò! !2 0¼¼ÁÀ"ã3b3d‰f»™«¯Î»†¥§¦†Ëª«g¨X»ŸÀ”¼’¬Ÿ«h©ª® ¤ ¥±»7« º Å}£m¤R« ·WÂ>©µ® ¼ª¬™¨R²5¦ B Q ®¯¼’·’» ©Á¥¶·R¥¶·R¹ž»7¤5«ü 1{T꟦ ¼ÁÀ B 8¥¶·Ñ©ª­R­R¥Ä»™¥±¼’·Ò»7¼ »7¤5«Y {T꟦"«¯Î»™¬™©’®^»7«h­ÏÀ”¬7¼’º »7¤5«lã-bd‰fŒ®¯¼’¬7¨R²<¦‰¥§¦m°R²R¥¶´Ä»hÅ ‘ À »™« ¬²R¨p­R©Á»7¥¶·R¹»7¤R«0´¶«¯Î¸ ¥§®¯¼ª·@¯ À”¼’´±´¶¼ É ¥¶·R¹µ»™¤R«Œº« »7¤R¼X­­X«g¦™®¯¬™¥¶°<«h­¥¶·¬ É ¥±»7¤ B Q ‰»7¤R« »™«g¦H»®¯¼’¬7¨R²<¦¥§¦»™©ª¹ª¹’«g­>Ÿ¿O¥¶« ´§­X¥¶·R¹Q ÅG£m¤R« »™©ª¹ª¹’¥±·R¹Ò©ª®g®¯²R¬ ©ª® ¿L¥¶¦µ¬7«g¨<¼’¬H»™«g­¥±·»™©ª°R´±« X5˜« ³’« ·Õ¥ÄÀ ©ž¦7´±¥¶¹ª¤O».¥±ºŒ¨R¬™¼¾³ª«gºŒ« ·O».¥¶¦.¼’°5¦7« ¬™³ª«g­WÂ É ¥Ä»™¤Ô¬™«g¦7¨Q«g®^».»™¼ ®¯¼’¬7¨5²5¦we8hÂX»7¤5«¬7«h¦H²R´±».¥¶¦Tø’²5¥Ä»™«"­X¥¶¦™©Á¨Q¼ª¥¶·O»7¥¶·R¹5Å Ž/ RÅ  5Å  RÅ  5Å ü RÅ V ! e8 ý0!ªÅ ú ý0!’Å ú ýÁúRÅ  ýRÅ ú†! 
ýRÅ V üOýXÅ  Q ýXÅ ü ýRÅ ü ýÁúRÅ  ý¾ú5Å ý ý¾úRÅ 1 ý5Å 1 £k©ª°R´±«X51G.«g¦7²R´±»™¦e8©Á·5­}Q ¼ª·½»7¤R«"¦7«¯»-¯ ‘ ºŒ©ª·²5©Á´® ¤R«h® ËѼª·Õ»™¤R«[­R©¾» ©L®¯¼’´±´¶«g®¯»7«g­¼ª·Õ»™¤R« m[m[m¹O© ³ª«0²5¦¦H¼’º«®¯´¶²R«g¦T¥±·˜¼’¬™­X«g¬m»7¼µ« ÎX¨R´¶©ª¥±· »™¤R¥§¦ ¨Q¼O¼’¬¥¶ºŒ¨R¬™¼¾³ª« ºŒ«g·’»45Ê5¬™¦H»g«g³ª«g· É ¥Ä»™¤Ñ»7¤5«Ô® ´±«h©Á·R¥¶·R¹ ¨R¬™¼X®¯«g¦™¦gÂg»™¤R«.­R©¾» ©T® ¼ª´¶´±«h®^»7«h­"¬™« ºž©Á¥¶·³’« ¬™¿·5¼ª¥§¦H¿µÝ‘´¶©’® Ë ¼ÁÀŸ¨R²R·5®¯»7²5©Á»7¥¶¼ª·WÂoã3b3d‰fž» ©Á¹Ô« ¬™¬7¼’¬™¦gÂ É ¬™¼ª·R¹Ô¦H«g·O»7« ·<®¯« ©Á·<­ É ¼ª¬ ­Ñ»™¼ªË’« ·R¥".g©Á»7¥¶¼ª·WŸ«¯»™®ªÅ ß=.T¦7«g®¯¼’·5­>‰»7¤R«[­X¥±Ãp« ¬7¸ « ·<®¯«g¦Ï°Q«¯» É « « ·A»7¤R«¢­R©Á»™©N²5¦H«h­È»7¼N»7¬ ©Á¥¶·Í»™¤R«Õ»7¬™« « ݔ·5« É ¦7¨5©Á¨Q« ¬'©Á¬7»7¥§®¯´¶«g¦ ß3©ª·5­0»™¤R¼’¦7«‰À”¼’²R·5­¼ª·"»™¤R«zm+m[m ݔ´¶¥±»H»7«g¬™©Á»7²R¬™«ªÂ<® ¤5©¾»hÂX«¯»™®ªÅ ß´¶«g©’­R¦†»7¤5«ì  £9»7¼ŒºŒ¥¶¦™®¯´§©ª¦H¸ ¦7¥ÄÀ”¿¨R¬7¼’¨<«g¬·5©ÁºŒ«h¦ ’« ³ª«g· É ¥Ä»™¤Œ©0¤5¥±¹’¤Œ­R¥¶¦™®¯¬™¥±ºŒ¥¶·5©Á·<®¯« »7¤5¬7«h¦H¤R¼’´¶­WÅ {« ³’« ¬7»7¤R«g´±«h¦7¦g »7¤5¥¶¦'Ê<¬™¦H»«¯ÎX¨<«g¬7¥¶ºŒ« ·O»z« ·<®¯¼ª²5¬™©ª¹ª«‰²<¦ »7¼® ©Á¬™¬™¿T¼ª·0»™¤R¥§¦ É © ¿ªÅ3ꉬ7¼’°R¥±·5¹m»7¤R«)m[m[mÍ¥¶·¼ª¬ ­X« ¬>»™¼ ©Á²R»7¼ªºž©Á»7¥§® ©Á´¶´¶¿µ²R¨p­R©¾»™«"¦H«gºŒ©ª·O»7¥§®´¶«¯ÎX¥§®¯¼’·À”¼’¬.¨R¬7¼’¨<«g¬ ·5©ªºŒ«g¦k¥¶¦*©.¨R¬™¼ªºŒ¥§¦H¥¶·R¹©ª¨R¨R¬™¼’©’® ¤0¦7¥¶·5®¯«Ÿ¨R¬7¼’¨<«g¬3·5©ªºŒ«g¦ ©ª¨R¨<«h©Á¬*©Á·5­"­R¥¶¦™©Á¨Q«g©ª¬3« ³’« ¬™¿­R© ¿0©Á·<­ É «‰°<«g´±¥¶« ³’«z»7¤<©¾» ©ª·©ª²X»7¼’ºŒ©Á»7¥§®Œ¦H«h©Á¬ ® ¤R¥±·5¹½©Á·<­´¶«g©ª¬7·R¥¶·R¹Ô¨R¬™¼® «g¦™¦0® ©Á· ¤R«g´±¨N¨R¬™¼­R²5®¯¥¶·R¹Ñ¬™« ´¶« ³¾©Á·O»Ô¬7«h¦H¼’²R¬™® «g¦ É ¤5¥¶® ¤9®g©Á·Ç°Q« ­X¥¶¬™«g®^»™´±¿²5¦H«h­Ô¥±·˜©Á·¿¦H»™©Á»7¥§¦u»™¥¶®g©Á´-{‰­3ꢩÁ¨R¨5´±¥§® ©Á»7¥¶¼ª·WÅ Õ 1/ä 1 1 ST6 1 J 5¬X «h­z «g¬7¥§®Y «h® ¤R«¯»µ©ª·5­üR¬ ©Á·1å ® ¼ª¥§¦[³’¼ª·WÅ    RÅ?­3«g¦ ·R¼ªºž¦¨R¬™¼ª¨R¬™«g¦« ·˜»™¬™©ª¥Ä»™« ºŒ« ·O»©Á²X»™¼ªºž©¾»™¥¶øO²R«­X«´¶© ¨5©Á¬™¼ª´¶«ªÅäïá}Ž Rð;¾èÏæ ãçæRèî¾æ äÛð/ Œð ã5ä^èâXäïá [Áäcæ â5ð åªð¯çîeÁã…ªâ5ð^ç^Å }©Á·R¥¶« ´ †¥±Ë’« ´oÂéG.¥§® ¤5©Á¬ ­ÐìX® ¤ É ©ª¬H»Ž.ªÂÌ©ª·5­êG©Á´¶¨R¤ íÏ«g¥¶¦™® ¤R«h­X« ´oÅ !21 1 1RÅ ‘ ·Ó©Á´¶¹ª¼’¬7¥±»7¤Rº »™¤5©¾» ´±«h©Á¬™·5¦ É ¤5©¾»4¯ ¦0¥¶·Ò©·5©ªºŒ«ªÅµñ ’àŽXæ ãQðŒî3ð;¾è^ã<æ ã…[ö9Ô 
Rð à æ Ú ç ç^â5ðáÁãYÖîî3ð;Áè^ã5æ ã…ªÂ5ú 5Â"!¯¸ÛúRÅ ­3¼ª²W†¼¾³ª«h¦ ÂX« ·R¥§¦MÞª¼’²R³ª« »g˜ޒ²R« ¬™¹ª«g·[읥±«g·R« ´oÂG.« ·5©Á»7¼ ­X«y˜¼’¬7¥oÂXR¬† «h­ « ¬™¥§®‰ «h® ¤R«¯»hÂX­W²5® ¥¶©ª·R¼k¥§¦™¦H¼’¬7«’Â5©Á·5­ êz¥¶«¯»™¬7¼W­k©¾À‘©ª® «ªÅò  RÅ ‘ ¦H¬"À”¼ª¬©ª²X»7¼’ºŒ©Á»7¥§®ž­X¥¶¬7«h®^¸ »7¼’¬7¿©’¦7¦7¥§¦u» ©Á·5® «50»7¤5«Œ¦7ºž©ª­R©Ô¨R¬™¼ \ «h®^»hÅÚÛ·¢äïá Ž Qö Rð;Áèæ ã}Ú;Ô"ëoèUr‰á¾èîq¾çîRá; CìwèoÔfí0÷Vîî îÁÂ<ê©Á¬™¥§¦ Å ­ †¬7«g¥±ºž©Á·3ÂÞ R¬™¥±«h­Xºž©Á·WÂG ë¤R´¶¦7« ·3Âp©ª·5­  ìO»™¼ª·R«’Å !21 V5Å ë(K¾ç ç æ ï†à;¾äoæ‘á¾ã ¾ãpåðíTð…ªè7ð¯ç™ç^æ‘áÁãñæXè™ð™ð¯ç^Å í©’­R¦ É ¼’¬H»™¤WÅ *‰²R¹ª«g·R«  ¤<©Á¬™·R¥¶©ªËp  ²R¬7»7¥§¦°0«g·5­X¬™¥¶® ËX¦7¼ª·WÂo{« ¥¶´^ÞO©¾¸ ®¯¼’°5¦H¼’·Wµ©Á·<­ y[¥±Ë’«Çê'« ¬™Ëª¼ É ¥±»î.ªÅ !21 1ªú5Å *ŸøO²5©¾¸ »7¥¶¼ª·5¦ À”¼ª¬ ¨<©Á¬7»H¸ï¼ÁÀ ¸Û¦H¨Q« «h® ¤`» ©Á¹’¹ª¥¶·R¹5Å ÚÛ· ò’ò¾äP ÖÁäcæ‘áÁãlëzáÁã2‚¯ð è7ð¯ãpà ðá¾ãtèè^äcæ ï†à¯æ1Ú¯ã5äÛð$`Äæï…’ð ãQà ð  ¨5©Á¹’«g¦TýV?ò5ýV 1RÅ y˜¥§® ¤5©ª« ´  ¼ª´¶´¶¥±·5¦©ª·5­ z¼ª¬ ©Áº&읥¶·R¹’« ¬hÅ !41 1 1RÅÖT·5¦7²X¸ ¨<«g¬7³¥§¦H«h­žºŒ¼X­X« ´§¦zÀ”¼’¬Ÿ·<©ÁºŒ«g­ž«g·O»7¥±»u¿Œ® ´¶©’¦7¦7¥±Ê<® ©Á»7¥¶¼ª·WÅ ÚÛ·¸óz 9 Qæ è^æ‘à;0ñ[ð äPRáhå çÏæ ãèÖ}îeÛ pè7áhà™ð¯ç ç^æ ã…èÁãQå ô<ð è6‡.î¾èD…OðlëzáÁèi RáÁèî0ö/óŸñÖîeÛmö?ôpîwëlõ ó’óÁÂÖT·R¥¶³ª« ¬7¸ ¦H¥±»u¿µ¼ªÀ(y[©ª¬7¿´§©Á·5­WÅ G.¼’´¶©ª·5­[0²R¤R·ž©Á·<­G« ·5©Á»7¼­X«›y˜¼’¬7¥oÅ!41 1’üRÅ3£m¤R«©ª¨X¸ ¨R´¶¥¶®g©¾»7¥¶¼ª·Ô¼ÁÀ*¦H«gºŒ©ª·O»7¥§®}® ´¶©’¦7¦7¥±Ê<® ©Á»7¥¶¼ª·ž»™¬7«g«g¦Ÿ»7¼Œ·5©Á»H¸ ²R¬ ©Á´´§©Á·R¹’²5©Á¹’« ²R·5­X«g¬™¦H»™©ª·5­X¥¶·R¹5Åöڇóó,ó÷æRèî¾ã5ç$’à¯ö äcæ‘áÁãRçá¾ã„Û¾äoäÛð¯è^ã°èãï‡ ç æ§çŒÁãQåžñW’àŽXæ ãQðlÚ¯ã5äÛð$` æ ö …’ð ãQà ð Â-! ýRÝD’ß65   1?òOü RÂy[© ¿ªÅ £m¤R¥¶« ¬™¬™¿ÍìX¨R¬7¥¶«¯»Ò©Á·5­ y[©Á¬ ®ç*‰´Ä¸ï°-, «2. 
«ªÅ !21 1XÅø*»7¥±¸ øO²R«¯» ©Á¹ª«.¨5¬7¼’°5©Á°R¥¶´¶¥¶¦H»7«m«¯»†®¯¼ª·O»™¬™©ª¥±·O»7«h¦¦H¿·O»™©ÁÎX¥¶øO²R«h¦ Å æX莾æ äÛð$ žð¯ã<ä‘èâXäÛá ¾äoæ¯â5ðÏåªð¯ç½îeÁã…ªâ5ð^ç^†ÿ¼ª´Tú’üR ·-!¯¸ÎXÅ {T¥±·5© íÒ©ª¦™® ¤R¼’´¶­X«g¬gÂ-‰©Á« ´GT© ³O¥¶·WÂ*©Á·5­¬y˜¥§¦H¼¼’Ë  ¤R¼ª¥oÅ !21 1OýÅ ¥¶¦™©Áº°R¥¶¹ª²5©Á»7¥¶¼ª· ¼ªÀƨR¬™¼ª¨Q« ¬È·5©ªº«h¦Ó¥±· »7« ÎO»hŌÚÛ·;èz Äæ‘ð™åYÖÁäcâXèîkîeÁã…ªâ†/…OðŒÛ†è™ágà ð¯ç™ç^æ ã0…?ø è›Ö}îeÛrõ ó5õ Å G©ª´±¨5¤sí« ¥§¦™® ¤R«g­X«g´cÂG¥¶® ¤5©ª¬™­sìX® ¤ É ©ª¬H»Ž.ªÂnޒ«¯Ã:ꩪ´Ä¸ º²5®g®¯¥o y[©ª¬7¥¶«wy[«¯»7«g« ¬hÂ[©Á·5­’­3©ª·5®¯«¸G©Áºž¦7¤5© É Å !21 1ªúRÅ  ¼ª¨5¥±·R¹ É ¥Ä»™¤s©ªº°R¥¶¹ª²5¥Ä»u¿Í©Á·<­ ²R·R˝·R¼ É · É ¼’¬™­5¦m»7¤R¬™¼ª²5¹ª¤˜¨R¬™¼ª°5©ª°R¥¶´±¥§¦u»™¥¶®"º¼X­X«g´¶¦gÅëzá 9 QâXäF¾ö äcæ‘áÁãpî'æ ã0…ÁâXæ§ç^äoæ‘à^ç^Â-!415Ýiªß65 ú  16òXú V RÅ
2000
11
The order of prenominal adjectives in natural language generation

Robert Malouf
Alfa Informatica
Rijksuniversiteit Groningen
Postbus 716
9700 AS Groningen
The Netherlands
[email protected]

Abstract

The order of prenominal adjectival modifiers in English is governed by complex and difficult to describe constraints which straddle the boundary between competence and performance. This paper describes and compares a number of statistical and machine learning techniques for ordering sequences of adjectives in the context of a natural language generation system.

1 The problem

The question of robustness is a perennial problem for parsing systems. In order to be useful, a parser must be able to accept a wide range of input types, and must be able to gracefully deal with dysfluencies, false starts, and other ungrammatical input. In natural language generation, on the other hand, robustness is not an issue in the same way. While a tactical generator must be able to deal with a wide range of semantic inputs, it only needs to produce grammatical strings, and the grammar writer can select in advance which construction types will be considered grammatical. However, it is important that a generator not produce strings which are strictly speaking grammatical but for some reason unusual. This is a particular problem for dialog systems which use the same grammar for both parsing and generation. The looseness required for robust parsing is in direct opposition to the tightness needed for high quality generation.

One area where this tension shows itself clearly is in the order of prenominal modifiers in English. In principle, prenominal adjectives can, depending on context, occur in almost any order:

  the large red American car
  ??the American red large car
  *car American red the large

Some orders are more marked than others, but none are strictly speaking ungrammatical. So, the grammar should not put any strong constraints on adjective order.
For a generation system, however, it is important that sequences of adjectives be produced in the ‘correct’ order. Any other order will at best sound odd and at worst convey an unintended meaning. Unfortunately, while there are rules of thumb for ordering adjectives, none lend themselves to a computational implementation. For example, adjectives denoting size do tend to precede adjectives denoting color. However, these rules underspecify the relative order for many pairs of adjectives and are often difficult to apply in practice. In this paper, we will discuss a number of statistical and machine learning approaches to automatically extracting from large corpora the constraints on the order of prenominal adjectives in English.

2 Word bigram model

The problem of generating ordered sequences of adjectives is an instance of the more general problem of selecting among a number of possible outputs from a natural language generation system. One approach to this more general problem, taken by the ‘Nitrogen’ generator (Langkilde and Knight, 1998a; Langkilde and Knight, 1998b), takes advantage of standard statistical techniques by generating a lattice of all possible strings given a semantic representation as input and selecting the most likely output using a bigram language model. Langkilde and Knight report that this strategy yields good results for problems like generating verb/object collocations and for selecting the correct morphological form of a word. It also should be straightforwardly applicable to the more specific problem we are addressing here. To determine the correct order for a sequence of prenominal adjectives, we can simply generate all possible orderings and choose the one with the highest probability. This has the advantage of reducing the problem of adjective ordering to the problem of estimating n-gram probabilities, something which is relatively well understood.
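The generate-and-rank idea can be sketched directly: enumerate every permutation of the adjectives and keep the string the bigram model scores highest. The probability table below is invented purely for illustration; a real system would estimate these values from corpus counts with back-off smoothing rather than hard-code them.

```python
import itertools
import math

# Toy bigram log-probabilities; the numbers are invented for illustration.
LOGPROB = {
    ("the", "large"): math.log(0.020),
    ("large", "red"): math.log(0.008),
    ("red", "american"): math.log(0.006),
    ("american", "car"): math.log(0.009),
}
UNSEEN = math.log(1e-6)  # crude stand-in for back-off smoothing

def score(words):
    """Sum of bigram log-probabilities over a word sequence."""
    return sum(LOGPROB.get(pair, UNSEEN) for pair in zip(words, words[1:]))

def best_order(adjectives, noun, det="the"):
    """Generate all adjective permutations and keep the likeliest string."""
    candidates = [[det, *perm, noun]
                  for perm in itertools.permutations(adjectives)]
    return " ".join(max(candidates, key=score))

print(best_order(["red", "american", "large"], "car"))
# -> the large red american car
```

Note that the enumeration is exponential in the number of adjectives, which is tolerable only because prenominal sequences are short.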
To test the effectiveness of this strategy, we took as a dataset the first one million sentences of the written portion of the British National Corpus (Burnard, 1995).[1] We held out a randomly selected 10% of this dataset and constructed a backoff bigram model from the remaining 90% using the CMU-Cambridge statistical language modeling toolkit (Clarkson and Rosenfeld, 1997). We then evaluated the model by extracting all sequences of two or more adjectives followed by a noun from the held-out test data and counted the number of such sequences for which the most likely order was the actually observed order. Note that while the model was constructed using the entire training set, it was evaluated based on only sequences of adjectives. The results of this experiment were somewhat disappointing. Of 5,113 adjective sequences found in the test data, the order was correctly predicted for only 3,864, for an overall prediction accuracy of 75.57%. The apparent reason that this method performs as poorly as it does for this particular problem is that sequences of adjectives are relatively rare in written English. This is evidenced by the fact that in the test data only one sequence of adjectives was found for every twenty sentences. With adjective sequences so rare, the chances of finding information about any particular sequence of adjectives are extremely small. The data is simply too sparse for this to be a reliable method.

[1] The relevant files were identified by the absence of the <settDesc> (spoken text “setting description”) SGML tag in the file header. Thanks to John Carroll for help in preparing the corpus.

3 The experiments

Since Langkilde and Knight’s general approach does not seem to be very effective in this particular case, we instead chose to pursue more focused solutions to the problem of generating correctly ordered sequences of prenominal adjectives.
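The evaluation criterion used in the section 2 experiment — predict an order for each held-out sequence and count exact matches against the attested order — can be sketched as follows. The stand-in model and test sequences here are invented for illustration; any of the ordering methods discussed in this paper could be plugged in as `model_order`.

```python
def accuracy(model_order, test_sequences):
    """Fraction of held-out sequences whose attested order the model
    reproduces when handed the adjectives in a neutral (sorted) order."""
    hits = sum(1 for seq in test_sequences
               if model_order(sorted(seq)) == list(seq))
    return hits / len(test_sequences)

def alphabetical(adjs):
    # Deliberately naive stand-in model: order adjectives alphabetically.
    return sorted(adjs)

held_out = [("large", "red"), ("red", "green"), ("old", "wooden")]
print(accuracy(alphabetical, held_out))  # two of the three toy orders match
```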
In addition, at least one generation algorithm (Carroll et al., 1999) inserts adjectival modifiers in a post-processing step. This makes it easy to integrate a distinct adjective-ordering module with the rest of the generation system.

3.1 The data

To evaluate various methods for ordering prenominal adjectives, we first constructed a dataset by taking all sequences of two or more adjectives followed by a common noun in the 100 million tokens of written English in the British National Corpus. From 247,032 sequences, we produced 262,838 individual pairs of adjectives. Among these pairs, there were 127,016 different pair types, and 23,941 different adjective types. For test purposes, we then randomly held out 10% of the pairs, and used the remaining 90% as the training sample. Before we look at the different methods for predicting the order of adjective pairs, there are two properties of this dataset which bear noting. First, it is quite sparse. More than 76% of the adjective pair types occur only once, and 49% of the adjective types only occur once. Second, we get no useful information about the syntagmatic context in which a pair appears. The lefthand context is almost always a determiner, and including information about the modified head noun would only make the data even sparser. This lack of context makes this problem different from other problems, such as part-of-speech tagging and grapheme-to-phoneme conversion, for which statistical and machine learning solutions have been proposed.

3.2 Direct evidence

The simplest strategy for ordering adjectives is what Shaw and Hatzivassiloglou (1999) call the direct evidence method. To order the pair {a,b}, count how many times the ordered sequences ⟨a,b⟩ and ⟨b,a⟩ appear in the training data and output the pair in the order which occurred more often. This method has the advantage of being conceptually very simple, easy to implement, and highly accurate for pairs of adjectives which actually appear in the training data.
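The counting scheme just described amounts to a lookup over observed ordered-pair frequencies. A minimal sketch, with invented counts and a coin-flip for unseen or tied pairs (one possible tie-breaking policy):

```python
import random
from collections import Counter

# Observed ordered-pair counts; the numbers are invented for illustration.
ORDERED = Counter({("large", "red"): 17, ("red", "large"): 2,
                   ("old", "wooden"): 5})

def order(a, b, counts=ORDERED, rng=random):
    """Emit {a,b} in whichever order occurred more often in training;
    unseen (or tied) pairs get a coin flip."""
    ab, ba = counts[(a, b)], counts[(b, a)]
    if ab > ba:
        return (a, b)
    if ba > ab:
        return (b, a)
    return (a, b) if rng.random() < 0.5 else (b, a)

print(order("red", "large"))  # -> ('large', 'red'): 17 beats 2
```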
Applying this method to the adjective sequences taken from the BNC yields better than 98% accuracy for pairs that occurred in the training data. However, since, as we have seen, the majority of pairs occur only once, the overall accuracy of this method is 59.72%, only slightly better than random guessing. Fortunately, another strength of this method is that it is easy to identify those pairs for which it is likely to give the right result. This means that one can fall back on another less accurate but more general method for pairs which did not occur in the training data. In particular, if we randomly assign an order to unseen pairs, we can cut the error rate in half and raise the overall accuracy to 78.28%. It should be noted that the direct evidence method as employed here is slightly different from Shaw and Hatzivassiloglou’s: we simply compare raw token counts and take the larger value, while they applied a significance test to estimate the probability that a difference between counts arose strictly by chance. Like one finds in a trade-off between precision and recall, the use of a significance test slightly improved the accuracy of the method for those pairs which it had an opinion about, but also increased the number of pairs which had to be randomly assigned an order. As a result, the net impact of using a significance test for the BNC data was a very slight decrease in the overall prediction accuracy. The direct evidence method is straightforward to implement and gives impressive results for applications that involve a small number of frequent adjectives which occur in all relevant combinations in the training data. However, as a general approach to ordering adjectives, it leaves quite a bit to be desired. In order to overcome the sparseness inherent to this kind of data, we need a method which can generalize from the pairs which occur in the training data to unseen pairs.
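The kind of significance test mentioned above can be approximated with a one-sided binomial tail: under a fair coin, how likely is it that the majority order would show up at least m times out of n occurrences by chance? This is a sketch of the idea, not necessarily the exact test Shaw and Hatzivassiloglou applied.

```python
import math

def p_by_chance(m, n):
    """P(at least m heads in n fair coin flips): small values mean the
    observed count difference is unlikely to be mere chance."""
    return sum(math.comb(n, k) for k in range(m, n + 1)) / 2 ** n

print(round(p_by_chance(17, 19), 4))  # a 17-vs-2 split: very unlikely
print(round(p_by_chance(3, 5), 4))    # a 3-vs-2 split: no evidence at all
```

A method using such a test would only trust pairs whose tail probability falls below some threshold, assigning the rest a random order.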
3.3 Transitivity

One way to think of the direct evidence method is to see that it defines a relation ≺ on the set of English adjectives. Given two adjectives, if the ordered pair ⟨a,b⟩ appears in the training data more often than the pair ⟨b,a⟩, then a ≺ b. If the reverse is true, and ⟨b,a⟩ is found more often than ⟨a,b⟩, then b ≺ a. If neither order appears in the training data, then neither a ≺ b nor b ≺ a and an order must be randomly assigned. Shaw and Hatzivassiloglou (1999) propose to generalize the direct evidence method so that it can apply to unseen pairs of adjectives by computing the transitive closure of the ordering relation ≺. That is, if a ≺ c and c ≺ b, we can conclude that a ≺ b. To take an example from the BNC, the adjectives large and green never occur together in the training data, and so would be assigned a random order by the direct evidence method. However, the pairs ⟨large,new⟩ and ⟨new,green⟩ occur fairly frequently. Therefore, in the face of this evidence we can assign this pair the order ⟨large,green⟩, which not coincidentally is the correct English word order. The difficulty with applying the transitive closure method to any large dataset is that there often will be evidence for both orders of any given pair. For instance, alongside the evidence supporting the order ⟨large,green⟩, we also find the pairs ⟨green,byzantine⟩, ⟨byzantine,decorative⟩, and ⟨decorative,new⟩, which suggest the order ⟨green,large⟩. Intuitively, the evidence for the first order is quite a bit stronger than the evidence for the second. The first ordered pairs are more frequent, as are the individual adjectives involved. To quantify the relative strengths of these transitive inferences, Shaw and Hatzivassiloglou (1999) propose to assign a weight to each link. Say the order ⟨a,b⟩ occurs m times and the pair {a,b} occurs n times in total. Then the weight of the pair a → b is:
This weight decreases as the probability increases that the observed order did not arise strictly by chance. This way, the problem of finding the order best supported by the evidence can be stated as a general shortest path problem: to find the preferred order for {a,b}, compute the sum of the weights along the lowest-weighted path from a to b and from b to a, and choose whichever direction is lower. Using this method, Shaw and Hatzivassiloglou report prediction accuracies ranging from 81% to 95% on small, domain-specific samples. However, they note that the results are very domain-specific: applying a graph trained on one domain to a text from another generally gives very poor results, ranging from 54% to 58% accuracy. Applying this method to the BNC data gives 83.91% accuracy, in line with Shaw and Hatzivassiloglou’s results and considerably better than the direct evidence method. However, the method is computationally somewhat expensive. Like the direct evidence method, it requires storing every pair of adjectives found in the training data along with its frequency. In addition, it also requires solving the all-pairs shortest path problem, for which common algorithms run in O(n³) time.

3.4 Adjective bigrams
Another way to look at the direct evidence method is as a comparison between two probabilities. Given an adjective pair {a,b}, we compare the number of times we observed the order ⟨a,b⟩ to the number of times we observed the order ⟨b,a⟩. Dividing each of these counts by the total number of times {a,b} occurred gives us the maximum likelihood estimates of the probabilities P(⟨a,b⟩|{a,b}) and P(⟨b,a⟩|{a,b}). Looking at it this way, it should be clear why the direct evidence method does not work well, as maximum likelihood estimation of bigram probabilities is well known to fail in the face of sparse data. It should also be clear how we might improve the direct evidence method.
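The probabilistic reading of direct evidence can be made concrete with a maximum likelihood sketch (hypothetical counts; the sparse-data failure mode is exactly the case where the estimator has nothing to say):

```python
def mle_order_prob(count_ab, count_ba):
    """Maximum likelihood estimate of P(<a,b> | {a,b}) from raw counts.
    Returns None when the pair was never observed, which is the
    sparse-data failure mode discussed above."""
    total = count_ab + count_ba
    if total == 0:
        return None
    return count_ab / total

print(mle_order_prob(3, 1))  # 0.75
print(mle_order_prob(0, 0))  # None
```

Smoothing or backing off, as in the bigram model described next, replaces the `None` case with a usable estimate.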
Using the same strategy as described in section 2, we constructed a back-off bigram model of adjective pairs, again using the CMU-Cambridge toolkit. Since this model was constructed using only data specifically about adjective sequences, the relative infrequency of such sequences does not degrade its performance. Therefore, while the word bigram model gave an accuracy of only 75.57%, the adjective bigram model yields an overall prediction accuracy of 88.02% for the BNC data.

3.5 Memory-based learning
An important property of the direct evidence method for ordering adjectives is that it requires storing all of the adjective pairs observed in the training data. In this respect, the direct evidence method can be thought of as a kind of memory-based learning. Memory-based (also known as lazy, nearest-neighbor, instance-based, or case-based) approaches to classification work by storing all of the instances in the training data, along with their classes. To classify a new instance, the store of previously seen instances is searched to find those instances which most resemble the new instance with respect to some similarity metric. The new instance is then assigned a class based on the majority class of its nearest neighbors in the space of previously seen instances. To make the comparison between the direct evidence method and memory-based learning clearer, we can frame the problem of adjective ordering as a classification problem. Given an unordered pair {a,b}, we can assign it some canonical order to get an instance ab. Then, if a precedes b more often than b precedes a in the training data, we assign the instance ab to the class a ≺ b. Otherwise, we assign it to the class b ≺ a. Seen as a solution to a classification problem, the direct evidence method is then an application of memory-based learning where the chosen similarity metric is strict identity.
As with the interpretation of the direct evidence method explored in the previous section, this view both reveals a reason why the method is not very effective and also indicates a direction which can be taken to improve it. By requiring the new instance to be identical to a previously seen instance in order to classify it, the direct evidence method is unable to generalize from seen pairs to unseen pairs. Therefore, to improve the method, we need a more appropriate similarity metric that allows the classifier to get information from previously seen pairs which are relevant to but not identical to new unseen pairs. Following the conventional linguistic wisdom (Quirk et al., 1985, e.g.), this similarity metric should pick out adjectives which belong to the same semantic class. Unfortunately, for many adjectives this information is difficult or impossible to come by. Machine readable dictionaries and lexical databases such as WordNet (Fellbaum, 1998) do provide some information about semantic classes. However, the semantic classification in a lexical database may not make exactly the distinctions required for predicting adjective order. More seriously, available lexical databases are by necessity limited to a relatively small number of words, of which a relatively small fraction are adjectives. In practice, the available sources of semantic information only provide semantic classifications for fairly common adjectives, and these are precisely the adjectives which are found frequently in the training data and so for which semantic information is least necessary. While we do not reliably have access to the meaning of an adjective, we do always have access to its form. And, fortunately, for many of the cases in which the direct evidence method fails, finding a previously seen pair of adjectives with a similar form has the effect of finding a pair with a similar meaning. For example, suppose we want to order the adjective pair {21-year-old,Armenian}. 
If this pair appears in the training data, then the previous occurrences of this pair will be used to predict the order, and the method reduces to direct evidence. If, on the other hand, that particular pair did not appear in the training data, we can base the classification on previously seen pairs with a similar form. In this way, we may find pairs like {73-year-old,Colombian} and {44-year-old,Norwegian}, which have more or less the same distribution as the target pair. To test the effectiveness of a form-based similarity metric, we encoded each adjective pair ab as a vector of 16 features (the last 8 characters of a and the last 8 characters of b) and a class, a ≺ b or b ≺ a. Constructing the instance base and testing the classification was performed using the TiMBL 3.0 (Daelemans et al., 2000) memory-based learning system. Instances to be classified were compared to previously seen instances by counting the number of feature values that the two instances had in common. In computing the similarity score, features were weighted by their information gain, an information-theoretic measure of the relevance of a feature for determining the correct classification (Quinlan, 1986; Daelemans and van den Bosch, 1992). This weighting reduces the sensitivity of memory-based learning to the presence of irrelevant features. Given the probability $p_i$ of finding each class $i$ in the instance base $D$, we can compute the entropy $H(D)$, a measure of the amount of uncertainty in $D$:

$$H(D) = -\sum_{i} p_i \log_2 p_i$$

In the case of the adjective ordering data, there are two classes, a ≺ b and b ≺ a, each of which occurs with a probability of roughly 0.5, so the entropy of the instance base is close to 1 bit. We can also compute the entropy of a feature $f$ which takes values $V$ as the weighted sum of the entropy of each of the values in $V$:

$$H(D_f) = \sum_{v_i \in V} H(D_{f=v_i}) \frac{|D_{f=v_i}|}{|D|}$$

Here $H(D_{f=v_i})$ is the entropy of the subset of the instance base which has value $v_i$ for feature $f$.
The information gain of a feature then is simply the difference between the total entropy of the instance base and the entropy of a single feature:

$$G(D, f) = H(D) - H(D_f)$$

The information gain $G(D, f)$ is the reduction in uncertainty in $D$ we expect to achieve by learning the value of the feature $f$. In other words, knowing the value of a feature with a higher $G$ gets us closer on average to knowing the class of an instance than knowing the value of a feature with a lower $G$ does. The similarity $\Delta$ between two instances then is the number of feature values they have in common, weighted by the information gain:

$$\Delta(X, Y) = \sum_{i=1}^{n} G(D, i)\,\delta(x_i, y_i)$$

where:

$$\delta(x_i, y_i) = \begin{cases} 1 & \text{if } x_i = y_i \\ 0 & \text{otherwise} \end{cases}$$

Classification was based on the five training instances most similar to the instance to be classified, and produced an overall prediction accuracy of 89.34% for the BNC data.

3.6 Positional probabilities
One difficulty faced by each of the methods described so far is that they all, to one degree or another, depend on finding particular pairs of adjectives. For example, in order for the direct evidence method to assign an order to a pair of adjectives like {blue, large}, this specific pair must have appeared in the training data. If not, an order will have to be assigned randomly, even if the individual adjectives blue and large appear quite frequently in combination with a wide variety of other adjectives. Both the adjective bigram method and the memory-based learning method reduce this dependency on pairs to a certain extent, but these methods still suffer from the fact that even for common adjectives one is much less likely to find a specific pair in the training data than to find some pair of which a specific adjective is a member. Recall that the adjective bigram method depended on estimating the probabilities P(⟨a,b⟩|{a,b}) and P(⟨b,a⟩|{a,b}).
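The information-gain weighting of section 3.5 can be reproduced in a few lines. This is a toy sketch, not the TiMBL implementation; instances are tuples of symbolic feature values, and the example data is ours.

```python
import math
from collections import Counter

def entropy(labels):
    """H(D) = -sum_i p_i log2 p_i over the class distribution."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(instances, labels, f):
    """G(D, f) = H(D) - H(D_f): expected reduction in class uncertainty
    from learning the value of feature f."""
    n = len(labels)
    by_value = {}
    for x, y in zip(instances, labels):
        by_value.setdefault(x[f], []).append(y)
    h_f = sum(entropy(ys) * len(ys) / n for ys in by_value.values())
    return entropy(labels) - h_f

def similarity(x, y, gains):
    """Delta(X, Y): information-gain-weighted count of matching values."""
    return sum(g for g, xi, yi in zip(gains, x, y) if xi == yi)

# Toy instance base: feature 0 fully predicts the class, feature 1 is noise.
instances = [("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")]
labels = ["p", "p", "q", "q"]
gains = [info_gain(instances, labels, f) for f in range(2)]
print(gains)  # [1.0, 0.0]
```

Under this weighting, agreement on an informative feature contributes fully to the similarity score, while agreement on a noise feature contributes nothing.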
Suppose we now assume that the probability of a particular adjective appearing first in a sequence depends only on that adjective, and not on the other adjectives in the sequence. We can easily estimate the probability that if an adjective pair includes some given adjective a, then that adjective occurs first (call this P(⟨a,x⟩|{a,x})) by looking at each pair in the training data that includes the adjective a. Then, given the assumption of independence, the probability P(⟨a,b⟩|{a,b}) is simply the product of P(⟨a,x⟩|{a,x}) and P(⟨x,b⟩|{b,x}). Taking the most likely order for a pair of adjectives using this alternative method for estimating P(⟨a,b⟩|{a,b}) and P(⟨b,a⟩|{a,b}) gives quite good results: a prediction accuracy of 89.73% for the BNC data. At first glance, the effectiveness of this method may be surprising, since it is based on an independence assumption which common sense indicates must not be true. However, to order a pair of adjectives, this method brings to bear information from all the previously seen pairs which include either of the adjectives in the pair in question. Since it makes much more effective use of the training data, it can nevertheless achieve high accuracy. This method also has the advantage of being computationally quite simple. Applying it requires only one easy-to-calculate value to be stored for each possible adjective. Compared to the other methods, which require at a minimum that all of the training data be available during classification, this represents a considerable resource savings.

3.7 Combined method
The two highest scoring methods, using memory-based learning and positional probability, perform similarly, and from the point of view of accuracy there is little to recommend one method over the other.
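The positional probability method of section 3.6 needs only a single number per adjective. A minimal sketch (function names are ours; unseen adjectives fall back on an uninformative 0.5):

```python
from collections import Counter

def positional_probs(pairs):
    """Estimate P(<a,x>|{a,x}) for each adjective a: the fraction of
    observed pairs containing a in which a appears first."""
    first, total = Counter(), Counter()
    for a, b in pairs:  # pairs are observed ordered adjective sequences
        first[a] += 1
        total[a] += 1
        total[b] += 1
    return {adj: first[adj] / total[adj] for adj in total}

def predict(probs, a, b):
    """Under the independence assumption, P(<a,b>|{a,b}) is approximated
    by P(<a,x>|{a,x}) * P(<x,b>|{b,x}); prefer whichever order wins."""
    p_ab = probs.get(a, 0.5) * (1 - probs.get(b, 0.5))
    p_ba = probs.get(b, 0.5) * (1 - probs.get(a, 0.5))
    return (a, b) if p_ab >= p_ba else (b, a)

probs = positional_probs([("large", "new"), ("large", "new"), ("new", "green")])
print(predict(probs, "green", "large"))  # ('large', 'green')
```

Note that the example orders the unseen pair {large, green} correctly using only the per-adjective statistics, which is exactly the generalization the pair-based methods lack.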
However, it is interesting to note that the errors made by the two methods do not completely overlap: while either of the methods gives the right answer for about 89% of the test data, at least one of the two is right 95.00% of the time. This indicates that a method which combined the information used by the memory-based learning and positional probability methods ought to be able to perform better than either one individually. To test this possibility, we added two new features to the representation described in section 3.5. Besides information about the morphological form of the adjectives in the pair, we also included the positional probabilities P(⟨a,x⟩|{a,x}) and P(⟨b,x⟩|{b,x}) as real-valued features. For numeric features, the similarity metric $\Delta$ is computed using the scaled difference between the values:

$$\delta(x_i, y_i) = \frac{x_i - y_i}{\max_i - \min_i}$$

Repeating the MBL experiment with these two additional features yields 91.85% accuracy for the BNC data, a 24% reduction in error rate over purely morphological MBL, with only a modest increase in resource requirements.

4 Future directions
To get an idea of what the upper bound on accuracy is for this task, we tried applying the direct evidence method trained on both the training data and the held-out test data. This gave an accuracy of approximately 99%, which means that 1% of the pairs in the corpus are in the ‘wrong’ order. For an even larger percentage of pairs either order is acceptable, so an evaluation procedure which assumes that the observed order is the only correct order will underestimate the classification accuracy. Native speaker intuitions about infrequently occurring adjectives are not very strong, so it is difficult to estimate what fraction of adjective pairs in the corpus are actually unordered. However, it should be clear that even a perfect method for ordering adjectives would score well below 100%, given the experimental set-up described here.
While the combined MBL method achieves reasonably good results even given the limitations of the evaluation method, there is still clearly room for improvement. Future work will pursue at least two directions for improving the results. First, while semantic information is not available for all adjectives, it is clearly available for some. Furthermore, any realistic dialog system would make use of some limited vocabulary for which semantic information would be available. More generally, distributional clustering techniques (Schütze, 1992; Pereira et al., 1993) could be applied to extract semantic classes from the corpus itself. Since the constraints on adjective ordering in English depend largely on semantic classes, the addition of semantic information to the model ought to improve the results. The second area where the methods described here could be improved is in the way that multiple information sources are integrated. The technique described in section 3.7 is a fairly crude method for combining frequency information with symbolic data. It would be worthwhile to investigate applying some of the more sophisticated ensemble learning techniques which have been proposed in the literature (Dietterich, 1997). In particular, boosting (Schapire, 1999; Abney et al., 1999) offers the possibility of achieving high accuracy from a collection of classifiers which individually perform quite poorly.

Direct evidence          78.28%
Adjective bigrams        88.02%
MBL (morphological)      89.34% (*)
Positional probabilities 89.73% (*)
MBL (combined)           91.85%

Table 1: Summary of results. With the exception of the starred values, all differences are statistically significant (p < 0.005).

5 Conclusion
In this paper, we have presented the results of applying a number of statistical and machine learning techniques to the problem of predicting the order of prenominal adjectives in English. The scores for each of the methods are summarized in table 1.
The best methods yield around 90% accuracy, better than the best previously published methods when applied to the broad-domain data of the British National Corpus. Note that McNemar’s test (Dietterich, 1998) confirms the significance of all of the differences reflected here (with p < 0.005), with the exception of the difference between purely morphological MBL and the method based on positional probabilities. From this investigation, we can draw some additional conclusions. First, a solution specific to adjective ordering works better than a general probabilistic filter. Second, machine learning techniques can be applied to a different kind of linguistic problem with some success, even in the absence of syntagmatic context, and can be used to augment a hand-built competence grammar. Third, in some cases statistical and memory-based learning techniques can be combined in a way that performs better than either individually.

6 Acknowledgments
I am indebted to Carol Bleyle, John Carroll, Ann Copestake, Guido Minnen, Miles Osborne, audiences at the University of Groningen and the University of Sussex, and three anonymous reviewers for their comments and suggestions. The work described here was supported by the School of Behavioral and Cognitive Neurosciences at the University of Groningen.

References
Steven Abney, Robert E. Schapire, and Yoram Singer. 1999. Boosting applied to tagging and PP attachment. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora.
Lou Burnard. 1995. Users reference guide for the British National Corpus, version 1.0. Technical report, Oxford University Computing Services.
John Carroll, Ann Copestake, Dan Flickinger, and Victor Poznanski. 1999. An efficient chart generator for (semi-)lexicalist grammars. In Proceedings of the 7th European Workshop on Natural Language Generation (EWNLG'99), pages 86–95, Toulouse.
Philip R. Clarkson and Ronald Rosenfeld. 1997.
Statistical language modeling using the CMU-Cambridge Toolkit. In G. Kokkinakis, N. Fakotakis, and E. Dermatas, editors, Eurospeech '97 Proceedings, pages 2707–2710.
Walter Daelemans and Antal van den Bosch. 1992. Generalization performance of backpropagation learning on a syllabification task. In M.F.J. Drossaers and A. Nijholt, editors, Proceedings of TWLT3: Connectionism and Natural Language Processing, Enschede. University of Twente.
Walter Daelemans, Jakub Zavrel, Ko van der Sloot, and Antal van den Bosch. 2000. TiMBL: Tilburg memory based learner, version 3.0, reference guide. ILK Technical Report 00-01, Tilburg University. Available from http://ilk.kub.nl/~ilk/papers/ilk0001.ps.gz.
Thomas G. Dietterich. 1997. Machine learning research: four current directions. AI Magazine, 18:97–136.
Thomas G. Dietterich. 1998. Approximate statistical tests for comparing supervised classification learning algorithms. Neural Computation, 10(7):1895–1924.
Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA.
Irene Langkilde and Kevin Knight. 1998a. Generation that exploits corpus-based statistical knowledge. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, pages 704–710, Montreal.
Irene Langkilde and Kevin Knight. 1998b. The practical value of n-grams in generation. In Proceedings of the International Natural Language Generation Workshop, Niagara-on-the-Lake, Ontario.
Fernando Pereira, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. In Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics, pages 183–190.
J. Ross Quinlan. 1986. Induction of decision trees. Machine Learning, 1:81–106.
Randolph Quirk, Sidney Greenbaum, Geoffrey Leech, and Jan Svartvik. 1985. A Comprehensive Grammar of the English Language. Longman, London.
Robert E. Schapire. 1999.
A brief introduction to boosting. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence.
Hinrich Schütze. 1992. Dimensions of meaning. In Proceedings of Supercomputing, pages 787–796, Minneapolis.
James Shaw and Vasileios Hatzivassiloglou. 1999. Ordering among premodifiers. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 135–143, College Park, Maryland.
Spoken Dialogue Management Using Probabilistic Reasoning
Nicholas Roy and Joelle Pineau and Sebastian Thrun
Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213

Abstract
Spoken dialogue managers have benefited from using stochastic planners such as Markov Decision Processes (MDPs). However, MDPs so far do not handle noisy and ambiguous speech utterances well. We use a Partially Observable Markov Decision Process (POMDP)-style approach to generate dialogue strategies by inverting the notion of dialogue state; the state represents the user's intentions, rather than the system state. We demonstrate that under the same noisy conditions, a POMDP dialogue manager makes fewer mistakes than an MDP dialogue manager. Furthermore, as the quality of speech recognition degrades, the POMDP dialogue manager automatically adjusts the policy.

1 Introduction
The development of automatic speech recognition has made more natural human-computer interaction possible. Speech recognition and speech understanding, however, are not yet at the point where a computer can reliably extract the intended meaning from every human utterance. Human speech can be both noisy and ambiguous, and many real-world systems must also be speaker-independent. Regardless of these difficulties, any system that manages human-machine dialogues must be able to perform reliably even with noisy and stochastic speech input. Recent research in dialogue management has shown that Markov Decision Processes (MDPs) can be useful for generating effective dialogue strategies (Young, 1990; Levin et al., 1998); the system is modelled as a set of states that represent the dialogue as a whole, and a set of actions corresponding to speech productions from the system. The goal is to maximise the reward obtained for fulfilling a user's request. However, the correct way to represent the state of the dialogue is still an open problem (Singh et al., 1999).
A common solution is to restrict the system to a single goal. For example, in booking a flight in an automated travel agent system, the system state is described in terms of how close the agent is to being able to book the flight. Such systems suffer from a principal problem. A conventional MDP-based dialogue manager must know the current state of the system at all times, and therefore the state has to be wholly contained in the system representation. These systems perform well under certain conditions, but not all. For example, MDPs have been used successfully for such tasks as retrieving e-mail or making travel arrangements (Walker et al., 1998; Levin et al., 1998) over the phone, task domains that are generally low in both noise and ambiguity. However, the issue of reliability in the face of noise is a major concern for our application. Our dialogue manager was developed for a mobile robot application that has knowledge from several domains, and must interact with many people over time. For speaker-independent systems and systems that must act in a noisy environment, the user's actions and intentions cannot always be used to infer the dialogue state; it may not be possible to reliably and completely determine the state of the dialogue following each utterance. The poor reliability of the audio signal on a mobile robot, coupled with the expectations of natural interaction that people have with more anthropomorphic interfaces, increases the demands placed on the dialogue manager. Most existing dialogue systems do not model confidences on recognition accuracy of the human utterances, and therefore do not account for the reliability of speech recognition when applying a dialogue strategy. Some systems do use the log-likelihood values for speech utterances, however these values are only thresholded to indicate whether the utterance needs to be confirmed (Niimi and Kobayashi, 1996; Singh et al., 1999).
An important concept lying at the heart of this issue is that of observability – the ultimate goal of a dialogue system is to satisfy a user request; however, what the user really wants is at best partially observable. We handle the problem of partial observability by inverting the conventional notion of state in a dialogue. The world is viewed as partially unobservable – the underlying state is the intention of the user with respect to the dialogue task. The only observations about the user’s state are the speech utterances given by the speech recognition system, from which some knowledge about the current state can be inferred. By accepting the partial observability of the world, the dialogue problem becomes one that is addressed by Partially Observable Markov Decision Processes (POMDPs) (Sondik, 1971). Finding an optimal policy for a given POMDP model corresponds to defining an optimal dialogue strategy. Optimality is attained within the context of a set of rewards that define the relative value of taking various actions. We will show that conventional MDP solutions are insufficient, and that a more robust methodology is required. Note that in the limit of perfect sensing, the POMDP policy will be equivalent to an MDP policy. What the POMDP policy offers is an ability to compensate appropriately for better or worse sensing. As the speech recognition degrades, the POMDP policy acquires reward more slowly, but makes fewer mistakes and blind guesses compared to a conventional MDP policy. There are several POMDP algorithms that may be the natural choice for policy generation (Sondik, 1971; Monahan, 1982; Parr and Russell, 1995; Cassandra et al., 1997; Kaelbling et al., 1998; Thrun, 1999). However, solving real world dialogue scenarios is computationally intractable for full-blown POMDP solvers, as the complexity is doubly exponential in the number of states. 
We therefore will use an algorithm for finding approximate solutions to POMDP-style problems and apply it to dialogue management. This algorithm, the Augmented MDP, was developed for mobile robot navigation (Roy and Thrun, 1999), and operates by augmenting the state description with a compression of the current belief state. By representing the belief state succinctly with its entropy, belief-space planning can be approximated without the expected complexity. In the first section of this paper, we develop the model of dialogue interaction. This model allows for a more natural description of dialogue problems, and in particular allows for intuitive handling of noisy and ambiguous dialogues. Few existing dialogue systems can handle ambiguous input, typically relying on natural language processing to resolve semantic ambiguities (Aust and Ney, 1998). Secondly, we present a description of an example problem domain, and finally we present experimental results comparing the performance of the POMDP (approximated by the Augmented MDP) to conventional MDP dialogue strategies.

2 Dialogue Systems and POMDPs
A Partially Observable Markov Decision Process (POMDP) is a natural way of modelling dialogue processes, especially when the state of the system is viewed as the state of the user. The partial observability capabilities of a POMDP policy allow the dialogue planner to recover from noisy or ambiguous utterances in a natural and autonomous way. At no time does the machine interpreter have any direct knowledge of the state of the user, i.e., what the user wants. The machine interpreter can only infer this state from the user's noisy input. The POMDP framework provides a principled mechanism for modelling uncertainty about what the user is trying to accomplish. The POMDP consists of an underlying, unobservable Markov Decision Process.
The MDP is specified by:
- a set of states $S = \{s_1, s_2, \ldots, s_n\}$
- a set of actions $A = \{a_1, a_2, \ldots, a_m\}$
- a set of transition probabilities $T(s', a, s) = p(s' \mid a, s)$
- a set of rewards $R: S \times A \rightarrow \mathbb{R}$
- an initial state $s_0$

The actions represent the set of responses that the system can carry out. The transition probabilities form a structure over the set of states, connecting the states in a directed graph with arcs between states with non-zero transition probabilities. The rewards define the relative value of accomplishing certain actions when in certain states. The POMDP adds:
- a set of observations $O = \{o_1, o_2, \ldots, o_k\}$
- a set of observation probabilities $O(o, s, a) = p(o \mid s, a)$

and replaces
- the initial state $s_0$ with an initial belief, $p(s_0)$, over the states $s_0 \in S$
- the set of rewards with rewards conditioned on observations as well: $R: S \times A \times O \rightarrow \mathbb{R}$

The observations consist of a set of keywords which are extracted from the speech utterances. The POMDP plans in belief space; each belief consists of a probability distribution over the set of states, representing the respective probability that the user is in each of these states. The initial belief specified in the model is updated every time the system receives a new observation from the user. The POMDP model, as defined above, first goes through a planning phase, during which it finds an optimal strategy, or policy, which describes an optimal mapping of actions $a$ to beliefs $b(s)$ over states $s \in S$, for all possible beliefs. The dialogue manager uses this policy to direct its behaviour during conversations with users. The optimal strategy for a POMDP is one that prescribes action selection that maximises the expected reward. Unfortunately, finding an optimal policy exactly for all but the most trivial POMDP problems is computationally intractable. A near-optimal policy can be computed significantly faster than an exact one, at the expense of a slight reduction in performance.
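The belief update after each utterance is the standard POMDP Bayes filter; the paper does not spell it out, so the following is a sketch of the textbook form, with hypothetical transition and observation models standing in for the real ones.

```python
def belief_update(b, a, o, T, O, states):
    """Bayes filter over beliefs:
    b'(s') is proportional to O(o, s', a) * sum_s T(s', a, s) * b(s)."""
    new_b = {s2: O(o, s2, a) * sum(T(s2, a, s) * b[s] for s in states)
             for s2 in states}
    norm = sum(new_b.values())
    # If the observation is impossible under the model, keep the old belief.
    return {s: p / norm for s, p in new_b.items()} if norm > 0 else b

# Hypothetical two-state model: the user wants the time or the TV schedule.
states = ["want_time", "want_tv"]
T = lambda s2, a, s: 1.0 if s2 == s else 0.0  # user intent persists
O = lambda o, s, a: 0.9 if o == s else 0.1    # keywords are 90% reliable
b0 = {"want_time": 0.5, "want_tv": 0.5}
b1 = belief_update(b0, "ask", "want_time", T, O, states)
print(b1)  # want_time ≈ 0.9, want_tv ≈ 0.1
```

A single reliable keyword shifts a uniform belief sharply toward one intention; a noisy or ambiguous keyword shifts it only slightly, which is the behaviour the dialogue policy exploits.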
This is often done by imposing restrictions on the policies that can be selected, or by simplifying the belief state and solving for a simplified uncertainty representation. In the Augmented MDP approach, the POMDP problem is simplified by noticing that the belief state of the system tends to have a certain structure. The uncertainty that the system has is usually domain-specific and localised. For example, it may be likely that a household robot system can confuse TV channels ('ABC' for 'NBC'), but it is unlikely that the system will confuse a TV channel request with a request to get coffee. By making the localised assumption about the uncertainty, it becomes possible to summarise any given belief vector by a pair consisting of the most likely state and the entropy of the belief state:

$$b(s) \mapsto \left\langle \arg\max_{s \in S} b(s),\; H(b(s)) \right\rangle \quad (1)$$

$$H(b(s)) = -\sum_{s \in S} b(s) \log_2 b(s) \quad (2)$$

The entropy of the belief state approximates a sufficient statistic for the entire belief state.¹ Given this assumption, we can plan a policy for every possible ⟨state, entropy⟩ pair that approximates the POMDP policy for the corresponding belief $b(s)$.

¹Although sufficient statistics are usually moments of continuous distributions, our experience has shown that the entropy serves equally well.

Figure 1: Florence Nightingale, the prototype nursing home robot used in these experiments.

3 The Example Domain
The system that was used throughout these experiments is based on a mobile robot, Florence Nightingale (Flo), developed as a prototype nursing home assistant. Flo uses the Sphinx II speech recognition system (Ravishankar, 1996), and the Festival speech synthesis system (Black et al., 1999). Figure 1 shows a picture of the robot. Since the robot is a nursing home assistant, we use task domains that are relevant to assisted living in a home environment.
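The belief compression in equations (1) and (2) above is easy to state in code. A minimal sketch of the ⟨most likely state, entropy⟩ summary (the example belief is hypothetical):

```python
import math

def summarize_belief(b):
    """Map a belief b(s) to the <most likely state, entropy> pair used by
    the Augmented MDP as a stand-in for the full distribution."""
    s_star = max(b, key=b.get)
    h = -sum(p * math.log2(p) for p in b.values() if p > 0)
    return s_star, h

# A confident belief has low entropy; a uniform one is near log2(|S|).
print(summarize_belief({"want_time": 0.75, "want_tv": 0.25}))
```

Planning over ⟨state, entropy⟩ pairs rather than full distributions is what keeps the Augmented MDP tractable: the policy can still distinguish a confident belief from an ambiguous one without enumerating belief space.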
Table 1 shows a list of the task domains the user can inquire about (the time, the patient's medication schedule, what is on different TV stations), in addition to a list of robot motion commands. These abilities have all been implemented on Flo. The medication schedule is pre-programmed, the information about the TV schedules is downloaded on request from the web, and the motion commands correspond to pre-selected robot navigation sequences.

Time
Medication (Medication 1, Medication 2, ..., Medication n)
TV Schedules for different channels (ABC, NBC, CBS)
Robot Motion Commands (To the kitchen, To the Bedroom)

Table 1: The task domains for Flo.

If we translate these tasks into the framework that we have described, the decision problem has 13 states, and the state transition graph is given in Figure 2. The different tasks have varying levels of complexity, from simply saying the time, to going through a list of medications. For simplicity, only the maximum-likelihood transitions are shown in Figure 2. Note that this model is handcrafted. There is ongoing research into learning policies automatically using reinforcement learning (Singh et al., 1999); dialogue models could be learned in a similar manner. This example model is simply to illustrate the utility of the POMDP approach. There are 20 different actions; 10 actions correspond to different abilities of the robot, such as going to the kitchen or giving the time. The remaining 10 actions are clarification or confirmation actions, such as re-confirming the desired TV channel. There are 16 observations that correspond to relevant keywords, as well as a nonsense observation. The reward structure gives the most reward for choosing actions that satisfy the user request. These actions then lead back to the beginning state. Most other actions are penalised with an equivalent negative amount.
However, the confirmation/clarification actions are penalised lightly (values close to 0), and the motion commands are penalised heavily if taken from the wrong state, to illustrate the difference between an undesirable action that is merely irritating (i.e., giving an inappropriate response) and an action that can be much more costly (e.g., having the robot leave the room at the wrong time, or travel to the wrong destination). 3.1 An Example Dialogue Table 2 shows an example dialogue obtained by having an actual user interact with the system on the robot. The left-most column is the emitted observation from the speech recognition system. The operating conditions of the system are fairly poor, since the microphone is on-board the robot and subject to background noise as well as being located some distance from the user. In the final two lines of the script, the robot chooses the correct action after some confirmation questions, despite the fact that the signal from the speech recogniser is both very noisy and also ambiguous, containing cues both for the “say hello” response and for robot motion to the kitchen. 4 Experimental Results We compared the performance of the three algorithms (conventional MDP, POMDP approximated by the Augmented MDP, and exact POMDP) over the example domain. The metric used was to look at the total reward accumulated over the course of an extended test. In order to perform this full test, the observations and states from the underlying MDP were generated stochastically from the model and then given to the policy. The action taken by the policy was returned to the model, and the policy was rewarded based on the state-action-observation triplet. The experiments were run for a total of 100 dialogues, where each dialogue is considered to be a cycle of observation-action utterances from the start state request_begun through a sequence of states and back to the start state. The time was normalised by the length of each dialogue cycle. 
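The reward structure described above (satisfying actions rewarded, clarifications lightly penalised, wrong responses and especially wrong motion heavily penalised) can be sketched as follows; the specific state and action names are our own illustrations, and the numeric values are taken from the rewards visible in the example dialogue:

```python
def reward(state, action):
    """Sketch of the reward structure: +100 for the action that satisfies the
    current request, -1 for clarification/confirmation questions (merely
    irritating), and -100 for other wrong actions, including motion commands
    issued from the wrong state (costly mistakes)."""
    satisfies = {"want_time": "say_time",
                 "want_nbc": "say_nbc",
                 "send_robot_kitchen": "go_to_kitchen"}  # illustrative subset
    if satisfies.get(state) == action:
        return 100
    if action.startswith(("ask_", "confirm_")):
        return -1    # clarification question: irritating but cheap
    return -100      # wrong response, or robot drives off at the wrong time
```

The asymmetry between the -1 and -100 penalties is what makes the POMDP policy prefer a cheap confirmation question whenever the belief entropy is high.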
[Figure 2: A simplified graph of the basic Markov Decision Process underlying the dialogue manager. Only the maximum-likelihood transitions are shown. The states are: Request begun, No Request, Want Time, Want TV Info, Want ABC Info, Want NBC Info, Want CBS Info, Start Meds, Continue Meds, Done Meds, Send Robot, Send Robot to Kitchen, Send Robot to Bedroom.]

Observation                          | True State         | Belief Entropy | Action              | Reward
flo hello                            | request begun      | 0.406          | say hello           | 100
flo what is like                     | start meds         | 2.735          | ask repeat          | -100
flo what time is it for will the     | want time          | 0.490          | say time            | 100
flo was on abc                       | want tv            | 1.176          | ask which station   | -1
flo was on abc                       | want abc           | 0.886          | say abc             | 100
flo what is on nbc                   | want nbc           | 1.375          | confirm channel nbc | -1
flo yes                              | want nbc           | 0.062          | say nbc             | 100
flo go to the that pretty good what  | send robot         | 0.864          | ask robot where     | -1
flo that that hello be               | send robot bedroom | 1.839          | confirm robot place | -1
flo the bedroom any i                | send robot bedroom | 0.194          | go to bedroom       | 100
flo go it eight a hello              | send robot         | 1.110          | ask robot where     | -1
flo the kitchen hello                | send robot kitchen | 1.184          | go to kitchen       | 100

Table 2: An example dialogue. Note that the robot chooses the correct action in the final two exchanges, even though the utterance is both noisy and ambiguous.

4.1 The Restricted State Space Problem
The exact POMDP policy was generated using the Incremental Improvement algorithm (Cassandra et al., 1997). The solver was unable to complete a solution for the full state space, so we created a much smaller dialogue model, with only 7 states and 2 task domains: time and weather information. Figure 3 shows the performance of the three algorithms, over the course of 100 dialogues. Notice that the exact POMDP strategy outperformed both the conventional MDP and approximate POMDP; it accumulated the most reward, and did so with the fastest rate of accumulation.
The good performance of the exact POMDP is not surprising because it is an optimal solution for this problem, but the time to compute this strategy is high: 729 secs, compared with 1.6 msec for the MDP and 719 msec for the Augmented MDP.

[Figure 3: A comparison of the reward gained over time for the exact POMDP, POMDP approximated by the Augmented MDP, and the conventional MDP for the 7 state problem. In this case, the time is measured in dialogues, or iterations of satisfying user requests.]

4.2 The Full State Space Problem
Figure 4 demonstrates the algorithms on the full dialogue model as given in Figure 2. Because of the number of states, no exact POMDP solution could be computed for this problem; the POMDP policy is restricted to the approximate solution. The POMDP solution clearly outperforms the conventional MDP strategy, as it more than triples the total accumulated reward over the lifetime of the strategies, although at the cost of taking longer to reach the goal state in each dialogue.

[Figure 4: A comparison of the reward gained over time for the approximate POMDP vs. the conventional MDP for the 13 state problem. Again, the time is measured in number of actions.]

Table 3 breaks down the numbers in more detail. The average reward for the POMDP is 18.6 per action, which is the maximum reward for most actions, suggesting that the POMDP is taking the right action about 95% of the time. Furthermore, the average reward per dialogue for the POMDP is 230 compared to 49.7 for the conventional MDP, which suggests that the conventional MDP is making a large number of mistakes in each dialogue.
Finally, the standard deviation for the POMDP is much narrower, suggesting that this algorithm is getting its rewards much more consistently than the conventional MDP.

4.3 Verification of Models on Users
We verified the utility of the POMDP approach by testing the approximating model on human users. The user testing of the robot is still preliminary, and therefore the experiment presented here cannot be considered a rigorous demonstration. However, Table 4 shows some promising results. Again, the POMDP policy is the one provided by the approximating Augmented MDP. The experiment consisted of having users interact with the mobile robot under a variety of conditions. The users tested both the POMDP and an implementation of a conventional MDP dialogue manager. Both planners used exactly the same model. The users were presented first with one manager, and then the other, although they were not told which manager was first and the order varied from user to user randomly. The user labelled each action from the system as “Correct” (+100 reward), “OK” (-1 reward) or “Wrong” (-100 reward). The “OK” label was used for responses by the robot that were questions (i.e., did not satisfy the user request) but were relevant to the request, e.g., a confirmation of TV channel when a TV channel was requested. The system performed differently for the three test subjects, compensating for the speech recognition accuracy which varied significantly between them. In user #2’s case, the POMDP manager took longer to satisfy the requests, but in general gained more reward per action. This is because the speech recognition system generally had lower word-accuracy for this user, either because the user had unusual speech patterns, or because the acoustic signal was corrupted by background noise. By comparison, user #3’s results show that in the limit of good sensing, the POMDP policy approaches the MDP policy.
This user had a much higher recognition rate from the speech recogniser, and consequently both the POMDP and conventional MDP acquire rewards at equivalent rates, and satisfied requests at similar rates.

                            POMDP            Conventional MDP
Average Reward Per Action   18.6 +/- 57.1    3.8 +/- 67.2
Average Dialogue Reward     230.7 +/- 77.4   49.7 +/- 193.7

Table 3: A comparison of the rewards accumulated for the two algorithms (approximate POMDP and conventional MDP) using the full model.

                            POMDP            Conventional MDP
User 1
  Reward Per Action         52.2             24.8
  Errors per request        0.1 +/- 0.09     0.55 +/- 0.44
  Time to fill request      1.9 +/- 0.47     2.0 +/- 1.51
User 2
  Reward Per Action         36.95            6.19
  Errors per request        0.1 +/- 0.09     0.825 +/- 1.56
  Time to fill request      2.5 +/- 1.22     1.86 +/- 1.47
User 3
  Reward Per Action         49.72            44.95
  Errors per request        0.18 +/- 0.15    0.36 +/- 0.37
  Time to fill request      1.63 +/- 1.15    1.42 +/- 0.63

Table 4: A comparison of the rewards accumulated for the two algorithms using the full model on real users, with results given as mean +/- std. dev.

5 Conclusion
This paper discusses a novel way to view the dialogue management problem. The domain is represented as the partially observable state of the user, where the observations are speech utterances from the user. The POMDP representation inverts the traditional notion of state in dialogue management, treating the state as unknown, but inferrable from the sequences of observations from the user. Our approach allows us to model observations from the user probabilistically, and in particular we can compensate appropriately for more or less reliable observations from the speech recognition system. In the limit of perfect recognition, we achieve the same performance as a conventional MDP dialogue policy. However, as recognition degrades, we can model the effects of actively gathering information from the user to offset the loss of information in the utterance stream.

In the past, POMDPs have not been used for dialogue management because of the computational complexity involved in solving anything but trivial problems. We avoid this problem by using an augmented MDP state representation for approximating the optimal policy, which allows us to find a solution that quantitatively outperforms the conventional MDP, while dramatically reducing the time to solution compared to an exact POMDP algorithm (linear vs. exponential in the number of states). We have shown experimentally both in simulation and in preliminary user testing that the POMDP solution consistently outperforms the conventional MDP dialogue manager, as a function of erroneous actions during the dialogue. We are able to show with actual users that as the speech recognition performance varies, the dialogue manager is able to compensate appropriately.

While the results of the POMDP approach to the dialogue system are promising, a number of improvements are needed. The POMDP is overly cautious, refusing to commit to a particular course of action until it is completely certain that it is appropriate. This is reflected in its liberal use of verification questions. This could be avoided by having some non-static reward structure, where information gathering becomes increasingly costly as it progresses. The policy is extremely sensitive to the parameters of the model, which are currently set by hand. While learning the parameters from scratch for a full POMDP is probably unnecessary, automatic tuning of the model parameters would definitely add to the utility of the model.
For example, the optimality of a policy is strongly dependent on the design of the reward structure. It follows that incorporating a learning component that adapts the reward structure to reflect actual user satisfaction would likely improve performance.

6 Acknowledgements
The authors would like to thank Tom Mitchell for his advice and support of this research. Kevin Lenzo and Mathur Ravishankar made our use of Sphinx possible, answered requests for information and made bug fixes willingly. Tony Cassandra was extremely helpful in distributing his POMDP code to us, and answering promptly any questions we had. The assistance of the Nursebot team is also gratefully acknowledged, including the members from the School of Nursing and the Department of Computer Science Intelligent Systems at the University of Pittsburgh. This research was supported in part by Le Fonds pour la Formation de Chercheurs et l’Aide à la Recherche (Fonds FCAR).

References
Harald Aust and Hermann Ney. 1998. Evaluating dialog systems used in the real world. In Proc. IEEE ICASSP, volume 2, pages 1053–1056.
A. Black, P. Taylor, and R. Caley. 1999. The Festival Speech Synthesis System, 1.4 edition.
Anthony Cassandra, Michael L. Littman, and Nevin L. Zhang. 1997. Incremental pruning: A simple, fast, exact algorithm for partially observable Markov decision processes. In Proc. 13th Ann. Conf. on Uncertainty in Artificial Intelligence (UAI-97), pages 54–61, San Francisco, CA.
Leslie Pack Kaelbling, Michael L. Littman, and Anthony R. Cassandra. 1998. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101:99–134.
Esther Levin, Roberto Pieraccini, and Wieland Eckert. 1998. Using Markov decision process for learning dialogue strategies. In Proc. International Conference on Acoustics, Speech and Signal Processing (ICASSP).
George E. Monahan. 1982. A survey of partially observable Markov decision processes. Management Science, 28(1):1–16.
Yasuhisa Niimi and Yutaka Kobayashi. 1996. Dialog control strategy based on the reliability of speech recognition. In Proc. International Conference on Spoken Language Processing (ICSLP).
Ronald Parr and Stuart Russell. 1995. Approximating optimal policies for partially observable stochastic domains. In Proceedings of the 14th International Joint Conference on Artificial Intelligence.
M. Ravishankar. 1996. Efficient Algorithms for Speech Recognition. Ph.D. thesis, Carnegie Mellon.
Nicholas Roy and Sebastian Thrun. 1999. Coastal navigation with mobile robots. In Advances in Neural Processing Systems, volume 12.
Satinder Singh, Michael Kearns, Diane Litman, and Marilyn Walker. 1999. Reinforcement learning for spoken dialog systems. In Advances in Neural Processing Systems, volume 12.
E. Sondik. 1971. The Optimal Control of Partially Observable Markov Decision Processes. Ph.D. thesis, Stanford University, Stanford, California.
Sebastian Thrun. 1999. Monte Carlo POMDPs. In S. A. Solla, T. K. Leen, and K. R. Müller, editors, Advances in Neural Processing Systems, volume 12.
Marilyn A. Walker, Jeanne C. Fromer, and Shrikanth Narayanan. 1998. Learning optimal dialogue strategies: a case study of a spoken dialogue agent for email. In Proc. ACL/COLING’98.
Sheryl Young. 1990. Use of dialogue, pragmatics and semantics to enhance speech recognition. Speech Communication, 9(5-6), Dec.
An Unsupervised Approach to Prepositional Phrase Attachment using Contextually Similar Words

Patrick Pantel and Dekang Lin
Department of Computing Science, University of Alberta1
Edmonton, Alberta T6G 2H1 Canada
{ppantel, lindek}@cs.ualberta.ca

1. This research was conducted at the University of Manitoba.

Abstract
Prepositional phrase attachment is a common source of ambiguity in natural language processing. We present an unsupervised corpus-based approach to prepositional phrase attachment that achieves similar performance to supervised methods. Unlike previous unsupervised approaches in which training data is obtained by heuristic extraction of unambiguous examples from a corpus, we use an iterative process to extract training data from an automatically parsed corpus. Attachment decisions are made using a linear combination of features and low frequency events are approximated using contextually similar words.

Introduction
Prepositional phrase attachment is a common source of ambiguity in natural language processing. The goal is to determine the attachment site of a prepositional phrase in a sentence. Consider the following examples:

1. Mary ate the salad with a fork.
2. Mary ate the salad with croutons.

In both cases, the task is to decide whether the prepositional phrase headed by the preposition with attaches to the noun phrase (NP) headed by salad or the verb phrase (VP) headed by ate. In the first sentence, with attaches to the VP since Mary is using a fork to eat her salad. In sentence 2, with attaches to the NP since it is the salad that contains croutons. Formally, prepositional phrase attachment is simplified to the following classification task. Given a 4-tuple of the form (V, N1, P, N2), where V is the head verb, N1 is the head noun of the object of V, P is a preposition, and N2 is the head noun of the prepositional complement, the goal is to classify the attachment as either adverbial (attaching to V) or adjectival (attaching to N1).
For example, the 4-tuple (eat, salad, with, fork) has target classification V. In this paper, we present an unsupervised corpus-based approach to prepositional phrase attachment that outperforms previous unsupervised techniques and approaches the performance of supervised methods. Unlike previous unsupervised approaches in which training data is obtained by heuristic extraction of unambiguous examples from a corpus, we use an iterative process to extract training data from an automatically parsed corpus. The attachment decision for a 4-tuple (V, N1, P, N2) is made as follows. First, we replace V and N2 by their contextually similar words and compute the average adverbial attachment score. Similarly, the average adjectival attachment score is computed by replacing N1 and N2 by their contextually similar words. Attachment scores are obtained using a linear combination of features of the 4-tuple. Finally, we combine the average attachment scores with the attachment score of N2 attaching to the original V and the attachment score of N2 attaching to the original N1. The proposed classification represents the attachment site that scored highest. 1 Previous Work Altmann and Steedman (1988) showed that current discourse context is often required for disambiguating attachments. Recent work shows that it is generally sufficient to utilize lexical information (Brill and Resnik, 1994; Collins and Brooks, 1995; Hindle and Rooth, 1993; Ratnaparkhi et al., 1994). One of the earliest corpus-based approaches to prepositional phrase attachment used lexical preference by computing co-occurrence frequencies (lexical associations) of verbs and nouns with prepositions (Hindle and Rooth, 1993). Training data was obtained by extracting all phrases of the form (V, N1, P, N2) from a large parsed corpus. Supervised methods later improved attachment accuracy. Ratnaparkhi et al. (1994) used a maximum entropy model considering only lexical information from within the verb phrase (ignoring N2). 
They experimented with both word features and word class features, their combination yielding 81.6% attachment accuracy. Later, Collins and Brooks (1995) achieved 84.5% accuracy by employing a backed-off model to smooth for unseen events. They discovered that P is the most informative lexical item for attachment disambiguation and keeping low frequency events increases performance. A non-statistical supervised approach by Brill and Resnik (1994) yielded 81.8% accuracy using a transformation-based approach (Brill, 1995) and incorporating word-class information. They report that the top 20 transformations learned involved specific prepositions supporting Collins and Brooks’ claim that the preposition is the most important lexical item for resolving the attachment ambiguity. The state of the art is a supervised algorithm that employs a semantically tagged corpus (Stetina and Nagao, 1997). Each word in a labelled corpus is sense-tagged using an unsupervised word-sense disambiguation algorithm with WordNet (Miller, 1990). Testing examples are classified using a decision tree induced from the training examples. They report 88.1% attachment accuracy approaching the human accuracy of 88.2% (Ratnaparkhi et al., 1994). The current unsupervised state of the art achieves 81.9% attachment accuracy (Ratnaparkhi, 1998). Using an extraction heuristic, unambiguous prepositional phrase attachments of the form (V, P, N2) and (N1, P, N2) are extracted from a large corpus. Cooccurrence frequencies are then used to disambiguate examples with ambiguous attachments. 2 Resources The input to our algorithm includes a collocation database and a corpus-based thesaurus, both available on the Internet2. Below, we briefly describe these resources. 
2.1 Collocation database
Given a word w in a dependency relationship (such as subject or object), the collocation database is used to retrieve the words that occurred in that relationship with w, in a large corpus, along with their frequencies (Lin, 1998a). Figure 1 shows excerpts of the entries in the collocation database for the words eat and salad. The database contains a total of 11 million unique dependency relationships.

2. Available at www.cs.ualberta.ca/~lindek/demos.htm.

Figure 1. Excerpts of entries in the collocation database for eat and salad:

eat:
  object: almond 1, apple 25, bean 5, beam 1, binge 1, bread 13, cake 17, cheese 8, dish 14, disorder 20, egg 31, grape 12, grub 2, hay 3, junk 1, meat 70, poultry 3, rabbit 4, soup 5, sandwich 18, pasta 7, vegetable 35, ...
  subject: adult 3, animal 8, beetle 1, cat 3, child 41, decrease 1, dog 24, family 29, guest 7, kid 22, patient 7, refugee 2, rider 1, Russian 1, shark 2, something 19, We 239, wolf 5, ...
salad:
  adj-modifier: assorted 1, crisp 4, fresh 13, good 3, grilled 5, leftover 3, mixed 4, olive 3, prepared 3, side 4, small 6, special 5, vegetable 3, ...
  object-of: add 3, consume 1, dress 1, grow 1, harvest 2, have 20, like 5, love 1, mix 1, pick 1, place 3, prepare 4, return 3, rinse 1, season 1, serve 8, sprinkle 1, taste 1, test 1, Toss 8, try 3, ...

Table 1. The top 20 most similar words of eat and salad as given by (Lin, 1998b).

EAT: cook 0.127, drink 0.108, consume 0.101, feed 0.094, taste 0.093, like 0.092, serve 0.089, bake 0.087, sleep 0.086, pick 0.085, fry 0.084, freeze 0.081, enjoy 0.079, smoke 0.078, harvest 0.076, love 0.076, chop 0.074, sprinkle 0.072, Toss 0.072, chew 0.072
SALAD: soup 0.172, sandwich 0.169, sauce 0.152, pasta 0.149, dish 0.135, vegetable 0.135, cheese 0.132, dessert 0.13, entree 0.121, bread 0.116, meat 0.116, chicken 0.115, pizza 0.114, rice 0.112, seafood 0.11, dressing 0.109, cake 0.107, steak 0.105, noodle 0.105, bean 0.102
2.2 Corpus-based thesaurus
Using the collocation database, Lin (1998b) used an unsupervised method to construct a corpus-based thesaurus consisting of 11839 nouns, 3639 verbs and 5658 adjectives/adverbs. Given a word w, the thesaurus returns a set of similar words of w along with their similarity to w. For example, the 20 most similar words of eat and salad are shown in Table 1.

3 Training Data Extraction
We parsed a 125-million word newspaper corpus with Minipar3, a descendant of Principar (Lin, 1994). Minipar outputs dependency trees (Lin, 1999) from the input sentences. For example, the sentence "A man in the park saw a dog with a telescope" is decomposed into a dependency tree linking each word to its head through relations such as det, subj, obj, mod and pcomp. Occasionally, the parser generates incorrect dependency trees. For example, in the above sentence, the prepositional phrase headed by with should attach to saw (as opposed to dog). Two separate sets of training data were then extracted from this corpus. Below, we briefly describe how we obtained these data sets.

3. Available at www.cs.ualberta.ca/~lindek/minipar.htm.

3.1 Ambiguous Data Set
For each input sentence, Minipar outputs a single dependency tree. For a sentence containing one or more prepositions, we use a program to detect any alternative prepositional attachment sites. For example, in the above sentence, the program would detect that with could attach to saw. Using an iterative algorithm, we initially create a table of cooccurrence frequencies for 3-tuples of the form (V, P, N2) and (N1, P, N2). For each of the k possible attachment sites of a preposition P, we increment the frequency of the corresponding 3-tuple by 1/k. For example, Table 2 shows the initial cooccurrence frequency table for the corresponding 3-tuples of the above sentence. In the following iterations of the algorithm, we update the frequency table as follows. For each of the k possible attachment sites of a preposition P, we refine its attachment score using the formulas described in Section 4: VScore(Vk, Pk, N2k) and NScore(N1k, Pk, N2k).
For any tuple (Wk, Pk, N2k), where Wk is either Vk or N1k, we update its frequency as:

    fr(W_k, P_k, N2_k) = \frac{Score(W_k, P_k, N2_k)}{\sum_{i=1}^{k} Score(W_i, P_i, N2_i)}

where Score(Wk, Pk, N2k) = VScore(Wk, Pk, N2k) if Wk = Vk; otherwise Score(Wk, Pk, N2k) = NScore(Wk, Pk, N2k). Suppose that after the initial frequency table is set NScore(man, in, park) = 1.23, VScore(saw, with, telescope) = 3.65, and NScore(dog, with, telescope) = 0.35. Then, the updated cooccurrence frequencies for (man, in, park) and (saw, with, telescope) are:

    fr(man, in, park) = 1.23 / 1.23 = 1.0
    fr(saw, with, telescope) = 3.65 / (3.65 + 0.35) = 0.913

Table 3 shows the updated frequency table after the first iteration of the algorithm. The resulting database contained 8,900,000 triples.

V OR N1 | P    | N2        | FREQUENCY
man     | in   | park      | 1.0
saw     | with | telescope | 0.5
dog     | with | telescope | 0.5

Table 2. Initial co-occurrence frequency table entries for "A man in the park saw a dog with a telescope."

V OR N1 | P    | N2        | FREQUENCY
man     | in   | park      | 1.0
saw     | with | telescope | 0.913
dog     | with | telescope | 0.087

Table 3. Co-occurrence frequency table entries for "A man in the park saw a dog with a telescope" after one iteration.

3.2 Unambiguous Data Set
As in (Ratnaparkhi, 1998), we constructed a training data set consisting of only unambiguous attachments of the form (V, P, N2) and (N1, P, N2). We only extract a 3-tuple from a sentence when our program finds no alternative attachment site for its preposition. Each extracted 3-tuple is assigned a frequency count of 1. For example, in the previous sentence, (man, in, park) is extracted since it contains only one attachment site; (dog, with, telescope) is not extracted since with has an alternative attachment site. The resulting database contained 4,400,000 triples.

4 Classification Model
Roth (1998) presented a unified framework for natural language disambiguation tasks.
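Stepping back to Section 3.1, the iterative re-estimation of ambiguous attachment counts can be sketched as follows (a minimal sketch; the `score` callback stands in for VScore/NScore from the paper):

```python
def update_frequencies(sites, score):
    """One iteration of the ambiguous-data re-estimation: each of the k
    candidate attachment sites of a preposition receives a fractional count
    equal to its attachment score normalised over all candidates.
    `sites` is a list of (W, P, N2) triples; `score` maps a triple to its
    VScore (if W is the verb) or NScore (if W is the noun)."""
    total = sum(score(t) for t in sites)
    return {t: score(t) / total for t in sites}

# The paper's example: VScore(saw, with, telescope) = 3.65 and
# NScore(dog, with, telescope) = 0.35 give counts 0.913 and 0.087.
scores = {("saw", "with", "telescope"): 3.65,
          ("dog", "with", "telescope"): 0.35}
counts = update_frequencies(list(scores), scores.get)
```

Repeating this update lets the unambiguous cases gradually sharpen the counts assigned to the ambiguous ones.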
Essentially, several language learning algorithms (e.g. naïve Bayes estimation, back-off estimation, transformation-based learning) were successfully cast as learning linear separators in their feature space. Roth modelled prepositional phrase attachment as linear combinations of features. The features consisted of all 15 possible sub-sequences of the 4-tuple (V, N1, P, N2) shown in Table 4. The asterisk (*) in features represents a wildcard. Roth used supervised learning to adjust the weights of the features. In our experiments, we only considered features that contained P since the preposition is the most important lexical item (Collins and Brooks, 1995). Furthermore, we omitted features that included both V and N1 since their co-occurrence is independent of the attachment decision. The resulting subset of features considered in our system is shown in bold in Table 4 (equivalent to assigning a weight of 0 or 1 to each feature). Let |head, rel, mod| represent the frequency, obtained from the training data, of the head occurring in the given relationship rel with the modifier. We then assign a score to each feature as follows:

1. (*, *, P, *) = log(|*, P, *| / |*, *, *|)
2. (V, *, P, N2) = log(|V, P, N2| / |*, *, *|)
3. (*, N1, P, N2) = log(|N1, P, N2| / |*, *, *|)
4. (V, *, P, *) = log(|V, P, *| / |V, *, *|)
5. (*, N1, P, *) = log(|N1, P, *| / |N1, *, *|)
6. (*, *, P, N2) = log(|*, P, N2| / |*, *, N2|)

Scores 1, 2, and 3 are the prior probabilities of P, V P N2, and N1 P N2 respectively; 4, 5, and 6 represent the conditional probabilities P(V P | V), P(N1 P | N1), and P(P N2 | N2) respectively.
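A minimal sketch of these log-ratio feature scores, assuming a `freq` table mapping wildcard patterns to corpus counts (our own interface, not the paper's code; the counts below are toy values):

```python
import math

def feature_score(freq, pattern, context):
    """One feature score: log(|pattern| / |context|), where patterns are
    (head-or-*, P-or-*, N2-or-*) triples and counts come from training data."""
    return math.log(freq[pattern] / freq[context])

# Toy counts, chosen only so that every pattern we look up is present.
freq = {
    ('*', '*', '*'): 1000.0,
    ('*', 'with', '*'): 100.0,     # feature 1: prior of the preposition
    ('eat', 'with', 'fork'): 2.0,  # feature 2: V P N2
    ('eat', 'with', '*'): 10.0,    # feature 4 numerator
    ('eat', '*', '*'): 50.0,       # feature 4 denominator
    ('*', 'with', 'fork'): 5.0,    # feature 6 numerator
    ('*', '*', 'fork'): 20.0,      # feature 6 denominator
}
f1 = feature_score(freq, ('*', 'with', '*'), ('*', '*', '*'))   # log(0.1)
f4 = feature_score(freq, ('eat', 'with', '*'), ('eat', '*', '*'))  # log(0.2)
```

Since each feature is a log probability, summing the selected features amounts to multiplying the corresponding probability estimates.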
We estimate the adverbial and adjectival attachment scores, VScore(V, P, N2) and NScore(N1, P, N2), as a linear combination of these features:

VScore(V, P, N2) = (*, *, P, *) + (V, *, P, N2) + (V, *, P, *) + (*, *, P, N2)
NScore(N1, P, N2) = (*, *, P, *) + (*, N1, P, N2) + (*, N1, P, *) + (*, *, P, N2)

For example, the attachment scores for (eat, salad, with, fork) are VScore(eat, with, fork) = -3.47 and NScore(salad, with, fork) = -4.77. The model correctly assigns a higher score to the adverbial attachment.

Table 4. The 15 features for prepositional phrase attachment:
(V, *, *, *)    (V, *, P, *)    (*, N1, *, N2)
(V, N1, *, *)   (V, *, *, N2)   (*, N1, P, N2)
(V, N1, P, *)   (V, *, P, N2)   (*, *, P, *)
(V, N1, *, N2)  (*, N1, *, *)   (*, *, *, N2)
(V, N1, P, N2)  (*, N1, P, *)   (*, *, P, N2)

5 Contextually Similar Words
The contextually similar words of a word w are words similar to the intended meaning of w in its context. Below, we describe an algorithm for constructing contextually similar words and we present a method for approximating the attachment scores using these words.

5.1 Algorithm
For our purposes, a context of w is simply a dependency relationship involving w. For example, a dependency relationship for saw in the example sentence of Section 3 is saw:obj:dog. Figure 2 gives the data flow diagram for our algorithm for constructing the contextually similar words of w. We retrieve from the collocation database the words that occurred in the same dependency relationship as w. We refer to this set of words as the cohort of w for the dependency relationship. Consider the words eat and salad in the context eat salad. The cohort of eat consists of verbs that appeared with object salad in Figure 1 (e.g. add, consume, cover, …) and the cohort of salad consists of nouns that appeared as object of eat in Figure 1 (e.g. almond, apple, bean, …). Intersecting the set of similar words and the cohort then forms the set of contextually similar words of w.
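The intersection step just described can be sketched as follows; the word lists below are illustrative stand-ins for the thesaurus and collocation database, not actual database contents:

```python
def contextually_similar(similar_words, cohort):
    """Contextually similar words of a word in a given dependency context:
    the intersection of its thesaurus neighbours (similar_words, ordered by
    similarity) with its cohort (words seen in the same dependency slot)."""
    members = set(cohort)
    return [w for w in similar_words if w in members]

# Hypothetical fragment of the fork example: similar words of 'fork'
# intersected with nouns seen in the 'eat with _' slot.
sims = ["spoon", "knife", "chopstick", "finger", "shovel"]
cohort = ["spoon", "knife", "finger", "hand", "gusto"]
cs = contextually_similar(sims, cohort)
```

Keeping the thesaurus ordering means the result is still ranked by similarity, which matters for the top-k averaging used later.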
For example, Table 5 shows the contextually similar words of eat and salad in the context eat salad and the contextually similar words of fork in the contexts eat with fork and salad with fork. The words in the first row are retrieved by intersecting the similar words of eat in Table 1 with the cohort of eat, while the second row represents the intersection of the similar words of salad in Table 1 and the cohort of salad. The third and fourth rows are determined in a similar manner. In the nonsensical context salad with fork (in row 4), no contextually similar words are found. While previous word sense disambiguation algorithms rely on a lexicon to provide sense inventories of words, the contextually similar words provide a way of distinguishing between different senses of words without committing to any particular sense inventory.

5.2 Attachment Approximation
Often, sparse data reduces our confidence in the attachment scores of Section 4. Using contextually similar words, we can approximate these scores. Given the tuple (V, N1, P, N2), adverbial attachments are approximated as follows. We first construct a list CSV containing the contextually similar words of V in context V:obj:N1 and a list CSN2V containing the contextually similar words of N2 in context V:P:N2 (i.e. assuming adverbial attachment). For each verb v in CSV, we compute VScore(v, P, N2) and set SV as the average of the largest k of these scores. Similarly, for each noun n in CSN2V, we compute VScore(V, P, n) and set SN2V as the average of the largest k of these scores. Then, the approximated adverbial attachment score, VScore', is:

VScore'(V, P, N2) = max(SV, SN2V)

We approximate the adjectival attachment score in a similar way. First, we construct a list CSN1 containing the contextually similar words of N1 in context V:obj:N1 and a list CSN2N1 containing the contextually similar words of N2 in context N1:P:N2 (i.e. assuming adjectival attachment).
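The top-k averaging that forms SV and SN2V (and, symmetrically, SN1 and SN2N1 for the adjectival side) can be sketched as follows; the numeric inputs are the VScore values from the paper's Table 6:

```python
def avg_top_k(scores, k=2):
    """Average of the k largest scores; None when no scores are available
    (e.g. a nonsensical context yields no contextually similar words)."""
    top = sorted(scores, reverse=True)[:k]
    return sum(top) / len(top) if top else None

def approx_score(subst_head_scores, subst_n2_scores, k=2):
    """VScore'(V, P, N2) = max(SV, SN2V): the better of the two averages
    obtained by substituting contextually similar heads or N2 nouns.
    The same function serves for NScore' with the adjectival score lists."""
    parts = [s for s in (avg_top_k(subst_head_scores, k),
                         avg_top_k(subst_n2_scores, k)) if s is not None]
    return max(parts) if parts else None

# Replacing the verb gives VScores -2.60 and -3.24 (SV = -2.92);
# replacing N2 gives -3.06 and -3.50 (SN2V = -3.28); the max is -2.92.
v_approx = approx_score([-2.60, -3.24], [-3.06, -3.50])
```

Taking the maximum of the two averages means a single reliable substitution direction is enough to back off a sparse 4-tuple.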
Now, we compute SN1 as the average of the largest k of NScore(n, P, N2) for each noun n in CSN1 and SN2N1 as the average of the largest k of NScore(N1, P, n) for each noun n in CSN2N1. Then, the approximated adjectival attachment score, NScore', is:

NScore'(N1, P, N2) = max(SN1, SN2N1)

For example, suppose we wish to approximate the attachment score for the 4-tuple (eat, salad, with, fork). First, we retrieve the contextually similar words of eat and salad in context eat salad, and the contextually similar words of fork in contexts eat with fork and salad with fork as shown in Table 5. Let k = 2. Table 6 shows the calculation of SV and SN2V while the calculation of SN1 and SN2N1 is shown in Table 7. Only the top k = 2 scores are shown in these tables.

[Figure 2. Data flow diagram for identifying the contextually similar words of a word in a dependency relationship: the word's similar words, from the corpus-based thesaurus, are intersected with its cohort, retrieved from the collocation database, to give its contextually similar words.]

WORD  | CONTEXT         | CONTEXTUALLY SIMILAR WORDS
EAT   | eat salad       | consume, taste, like, serve, pick, harvest, love, sprinkle, Toss, …
SALAD | eat salad       | soup, sandwich, pasta, dish, cheese, vegetable, bread, meat, cake, bean, …
FORK  | eat with fork   | spoon, knife, finger
FORK  | salad with fork | --

Table 5. Contextually similar words of eat and salad.

We have:

VScore'(eat, with, fork) = max(SV, SN2V) = -2.92
NScore'(salad, with, fork) = max(SN1, SN2N1) = -4.87

Hence, the approximation correctly prefers the adverbial attachment to the adjectival attachment.

6 Attachment Algorithm
Figure 3 describes the prepositional phrase attachment algorithm. As in previous approaches, examples with P = of are always classified as adjectival attachments. Suppose we wish to approximate the attachment score for the 4-tuple (eat, salad, with, fork). From the previous section, Step 1 returns averageV = -2.92 and averageN1 = -4.87. From Section 4, Step 2 gives aV = -3.47 and aN1 = -4.77.
In our training data, fV = 2.97 and fN1 = 0, thus Step 3 gives f = 0.914. In Step 4, we compute S(V) = -3.42 and S(N1) = -4.78. Since S(V) > S(N1), the algorithm correctly classifies this example as an adverbial attachment. Given the 4-tuple (eat, salad, with, croutons), the algorithm returns S(V) = -4.31 and S(N1) = -3.88. Hence, the algorithm correctly attaches the prepositional phrase to the noun salad.

7 Experimental Results

In this section, we describe our test data and the baseline for our experiments. Finally, we present our results.

7.1 Test Data

The test data consists of 3097 examples derived from the manually annotated attachments in the Penn Treebank Wall Street Journal data (Ratnaparkhi et al., 1994)[4]. Each line in the test data consists of a 4-tuple and a target classification: V N1 P N2 target. The data set contains several erroneous tuples and attachments. For instance, 133 examples contain the word the as N1 or N2. There are also improbable attachments such as (sing, birthday, to, you) with the target attachment birthday.

[4] Available at ftp.cis.upenn.edu/pub/adwait/PPattachData.

Table 6. Calculation of SV and SN2V for (eat, salad, with, fork).
  4-TUPLE                        VSCORE
  (mix, salad, with, fork)       -2.60
  (sprinkle, salad, with, fork)  -3.24
  SV                             -2.92
  (eat, salad, with, spoon)      -3.06
  (eat, salad, with, finger)     -3.50
  SN2V                           -3.28

Table 7. Calculation of SN1 and SN2N1 for (eat, salad, with, fork).
  4-TUPLE                   NSCORE
  (eat, pasta, with, fork)  -4.71
  (eat, cake, with, fork)   -5.02
  SN1                       -4.87
  n/a                       n/a
  SN2N1                     n/a

Input: A 4-tuple (V, N1, P, N2)

Step 1: Using the contextually similar words algorithm and the formulas from Section 5.2, compute:
  averageV = VScore'(V, P, N2)
  averageN1 = NScore'(N1, P, N2)

Step 2: Compute the adverbial attachment score, aV, and the adjectival attachment score, aN1:
  aV = VScore(V, P, N2)
  aN1 = NScore(N1, P, N2)

Step 3: Retrieve from the training data set the frequencies of the 3-tuples (V, P, N2) and (N1, P, N2) → fV and fN1, respectively.
Let f = (fV + fN1 + 0.2) / (fV + fN1 + 0.5)

Step 4: Combine the scores of Steps 1-3 to obtain the final attachment scores:
  S(V) = f·aV + (1 − f)·averageV
  S(N1) = f·aN1 + (1 − f)·averageN1

Output: The attachment decision: N1 if S(N1) > S(V) or P = of; V otherwise.

Figure 3. The prepositional phrase attachment algorithm.

7.2 Baseline

Choosing the most common attachment site, N1, yields an accuracy of 58.96%. However, we achieve 70.39% accuracy by classifying each occurrence of P = of as N1, and V otherwise. Human accuracy, given the full context of a sentence, is 93.2% and drops to 88.2% when given only tuples of the form (V, N1, P, N2) (Ratnaparkhi et al., 1994). Assuming that human accuracy is the upper bound for automatic methods, we expect our accuracy to be bounded above by 88.2% and below by 70.39%.

7.3 Results

We used the 3097-example testing corpus described in Section 7.1. Table 8 presents the precision and recall of our algorithm, and Table 9 presents a performance comparison between our system and previous supervised and unsupervised approaches using the same test data. We describe the different classifiers below:

  clbase: the baseline described in Section 7.2
  clR1: uses a maximum entropy model (Ratnaparkhi et al., 1994)
  clBR[5]: uses transformation-based learning (Brill and Resnik, 1994)
  clCB: uses a backed-off model (Collins and Brooks, 1995)
  clSN: induces a decision tree with a sense-tagged corpus, using a semantic dictionary (Stetina and Nagao, 1997)
  clHR[6]: uses lexical preference (Hindle and Rooth, 1993)
  clR2: uses a heuristic extraction of unambiguous attachments (Ratnaparkhi, 1998)
  clPL: uses the algorithm described in this paper

Our classifier outperforms all previous unsupervised techniques and approaches the performance of supervised algorithms. We reconstructed the two earlier unsupervised classifiers clHR and clR2. Table 10 presents the accuracy of our reconstructed classifiers.
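The decision rule of Figure 3 can be sketched as follows. This is a minimal illustration under the assumption that the direct scores (aV, aN1), the similar-word approximations (averageV, averageN1), and the training frequencies (fV, fN1) have already been computed; the numbers in the usage line are the paper's worked (eat, salad, with, fork) example.

```python
# Sketch of the Figure 3 attachment decision: blend the direct scores
# with the contextually-similar-word approximations, weighting the
# direct scores by how much training evidence (fV + fN1) exists.

def attach(aV, aN1, averageV, averageN1, fV, fN1, P):
    if P == "of":                                  # P = of is always adjectival
        return "N1"
    f = (fV + fN1 + 0.2) / (fV + fN1 + 0.5)        # Step 3
    sV = f * aV + (1 - f) * averageV               # Step 4
    sN1 = f * aN1 + (1 - f) * averageN1
    return "N1" if sN1 > sV else "V"

# Worked example from the text: (eat, salad, with, fork)
print(attach(aV=-3.47, aN1=-4.77, averageV=-2.92, averageN1=-4.87,
             fV=2.97, fN1=0, P="with"))           # adverbial attachment: "V"
```

With fV = 2.97 and fN1 = 0, f ≈ 0.914, giving S(V) ≈ -3.42 and S(N1) ≈ -4.78, matching the values reported above.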
The originally reported accuracy for clR2 is within the 95% confidence interval of our reconstruction. Our reconstruction of clHR achieved slightly higher accuracy than the original report.

[5] The accuracy is reported in (Collins and Brooks, 1995).
[6] The accuracy was obtained on a smaller test set, but from the same source as our test data.

Our classifier used a mixture of the two training data sets described in Section 3. In Table 11, we compare the performance of our system on the following training data sets:

  UNAMB: the data set of unambiguous examples described in Section 3.2
  EM0: the data set of Section 3.1 after frequency table initialization
  EM1: EM0 + one iteration of algorithm 3.1
  EM2: EM0 + two iterations of algorithm 3.1
  EM3: EM0 + three iterations of algorithm 3.1
  1/8-EM1: one eighth of the data in EM1
  MIX: the concatenation of UNAMB and EM1

Table 11 illustrates a slight but consistent increase in performance when using contextually similar words. However, since the confidence intervals overlap, we cannot claim with certainty that the contextually similar words improve performance.

Table 8. Precision and recall for attachment sites V and N1.
  CLASS  ACTUAL  CORRECT  INCORRECT  PRECISION  RECALL
  V      1203    994      209        82.63%     78.21%
  N1     1894    1617     277        84.31%     88.55%

Table 9. Performance comparison with other approaches.
  METHOD  LEARNING      ACCURACY
  clbase  --            70.39%
  clR1    supervised    81.6%
  clBR    supervised    81.9%
  clCB    supervised    84.5%
  clSN    supervised    88.1%
  clHR    unsupervised  75.8%
  clR2    unsupervised  81.91%
  clPL    unsupervised  84.31%

Table 10. Accuracy of our reconstruction of (Hindle & Rooth, 1993) and (Ratnaparkhi, 1998).
  METHOD  ORIGINAL REPORTED ACCURACY  RECONSTRUCTED SYSTEM ACCURACY (95% CONF)
  clHR    75.8%                       78.40% ± 1.45%
  clR2    81.91%                      82.40% ± 1.34%

In Section 7.1, we mentioned that some testing examples contained N1 = the or N2 = the. For supervised algorithms, the is represented in the training set as any other noun.
Consequently, these algorithms collect training data for the and performance is not affected. However, unsupervised methods break down on such examples. In Table 12, we illustrate the performance increase of our system when removing these erroneous examples.

Conclusion and Future Work

The algorithms presented in this paper advance the state of the art for unsupervised approaches to prepositional phrase attachment and draw near the performance of supervised methods. Currently, we are exploring different functions for combining contextually similar word approximations with the attachment scores. A promising approach considers the mutual information between the prepositional relationship of candidate attachments and N2. As the mutual information decreases, our confidence in the attachment score decreases and the contextually similar word approximation is weighted more heavily. Also, improving the construction algorithm for contextually similar words would possibly improve the accuracy of the system. One approach first clusters the similar words. Then, dependency relationships are used to select the most representative clusters as the contextually similar words. The assumption is that more representative similar words produce better approximations.

Acknowledgements

The authors wish to thank the reviewers for their helpful comments. This research was partly supported by Natural Sciences and Engineering Research Council of Canada grant OGP121338 and scholarship PGSB207797.

References

Altmann, G. and Steedman, M. 1988. Interaction with Context During Human Sentence Processing. Cognition, 30:191-238.
Brill, E. 1995. Transformation-based Error-driven Learning and Natural Language Processing: A Case Study in Part-of-Speech Tagging. Computational Linguistics, December.
Brill, E. and Resnik, P. 1994. A Rule-Based Approach to Prepositional Phrase Attachment Disambiguation. In Proceedings of COLING-94. Kyoto, Japan.
Collins, M. and Brooks, J. 1995.
Prepositional Phrase Attachment through a Backed-off Model. In Proceedings of the Third Workshop on Very Large Corpora, pp. 27-38. Cambridge, Massachusetts.
Hindle, D. and Rooth, M. 1993. Structural Ambiguity and Lexical Relations. Computational Linguistics, 19(1):103-120.
Lin, D. 1999. Automatic Identification of Non-Compositional Phrases. In Proceedings of ACL-99, pp. 317-324. College Park, Maryland.
Lin, D. 1998a. Extracting Collocations from Text Corpora. Workshop on Computational Terminology. Montreal, Canada.
Lin, D. 1998b. Automatic Retrieval and Clustering of Similar Words. In Proceedings of COLING-ACL98. Montreal, Canada.
Lin, D. 1994. Principar - an Efficient, Broad-Coverage, Principle-Based Parser. In Proceedings of COLING-94. Kyoto, Japan.
Miller, G. 1990. WordNet: an On-Line Lexical Database. International Journal of Lexicography, 1990.
Ratnaparkhi, A. 1998. Unsupervised Statistical Models for Prepositional Phrase Attachment. In Proceedings of COLING-ACL98. Montreal, Canada.
Ratnaparkhi, A., Reynar, J., and Roukos, S. 1994. A Maximum Entropy Model for Prepositional Phrase Attachment. In Proceedings of the ARPA Human Language Technology Workshop, pp. 250-255. Plainsboro, N.J.
Roth, D. 1998. Learning to Resolve Natural Language Ambiguities: A Unified Approach. In Proceedings of AAAI-98, pp. 806-813. Madison, Wisconsin.
Stetina, J. and Nagao, M. 1997. Corpus Based PP Attachment Ambiguity Resolution with a Semantic Dictionary. In Proceedings of the Fifth Workshop on Very Large Corpora, pp. 66-80. Beijing and Hong Kong.

Table 11. Performance comparison of different data sets.
  DATABASE     ACCURACY WITHOUT SIMWORDS (95% CONF)  ACCURACY WITH SIMWORDS (95% CONF)
  UNAMBIGUOUS  83.15% ± 1.32%                        83.60% ± 1.30%
  EM0          82.24% ± 1.35%                        82.69% ± 1.33%
  EM1          83.76% ± 1.30%                        83.92% ± 1.29%
  EM2          83.66% ± 1.30%                        83.70% ± 1.31%
  EM3          83.20% ± 1.32%                        83.20% ± 1.32%
  1/8-EM1      82.98% ± 1.32%                        83.15% ± 1.32%
  MIX          84.11% ± 1.29%                        84.31% ± 1.28%

Table 12. Performance with removal of the as N1 or N2.
  DATA SET     ACCURACY WITHOUT SIMWORDS (95% CONF)  ACCURACY WITH SIMWORDS (95% CONF)
  WITH THE     84.11% ± 1.29%                        84.31% ± 1.32%
  WITHOUT THE  84.44% ± 1.31%                        84.65% ± 1.30%
2000
14
A Unified Statistical Model for the Identification of English BaseNP

Endong Xun, Microsoft Research China, No. 49 Zhichun Road, Haidian District, 100080, China, [email protected]
Ming Zhou, Microsoft Research China, No. 49 Zhichun Road, Haidian District, 100080, China, [email protected]
Changning Huang, Microsoft Research China, No. 49 Zhichun Road, Haidian District, 100080, China, [email protected]

Abstract

This paper presents a novel statistical model for automatic identification of English baseNP. It uses two steps: N-best Part-Of-Speech (POS) tagging and baseNP identification given the N-best POS sequences. Unlike other approaches where the two steps are separated, we integrate them into a unified statistical framework. Our model also integrates lexical information. Finally, the Viterbi algorithm is applied to make a global search over the entire sentence, allowing us to obtain linear complexity for the entire process. Compared with other methods using the same testing set, our approach achieves 92.3% in precision and 93.2% in recall. The result is comparable with or better than the previously reported results.

1 Introduction

Finding simple and non-recursive base Noun Phrases (baseNP) is an important subtask for many natural language processing applications, such as partial parsing, information retrieval and machine translation. A baseNP is a simple noun phrase that does not recursively contain other noun phrases; for example, the elements within [...] in the following example are baseNPs, where NNS, IN, VBG, etc. are part-of-speech tags [as defined in M. Marcus 1993].

  [Measures/NNS] of/IN [manufacturing/VBG activity/NN] fell/VBD more/RBR than/IN [the/DT overall/JJ measures/NNS] ./.

  Figure 1: An example sentence with baseNP brackets

A number of researchers have dealt with the problem of baseNP identification (Church 1988; Bourigault 1992; Voutilainen 1993; Justeson & Katz 1995).
Recently some researchers have made experiments with the same test corpus extracted from the 20th section of the Penn Treebank Wall Street Journal (Penn Treebank). Ramshaw & Marcus (1998) applied a transformation-based error-driven algorithm (Brill 1995) to learn a set of transformation rules, and used those rules to locally update the bracket positions. Argamon, Dagan & Krymolowski (1998) introduced a memory-based sequence learning method; the training examples are stored and generalization is performed at application time by comparing subsequences of the new text to positive and negative evidence. Cardie & Pierce (1998, 1999) devised an error-driven pruning approach trained on the Penn Treebank. It extracts baseNP rules from the training corpus, prunes some bad baseNP rules by incremental training, and then applies the pruned rules to identify baseNPs through maximum length matching (or a dynamic programming algorithm).

Most of the prior work treats POS tagging and baseNP identification as two separate procedures. However, uncertainty is involved in both steps. Using the result of the first step as if it were certain will lead to more errors in the second step. A better approach is to consider the two steps together such that the final output accounts for the uncertainty in both steps. The approaches proposed by Ramshaw & Marcus and Cardie & Pierce are deterministic and local, while Argamon, Dagan & Krymolowski consider the problem globally and assign a score to each possible baseNP structure. However, they did not consider any lexical information.

This paper presents a novel statistical approach to baseNP identification, which considers both steps together within a unified statistical framework. It also takes lexical information into account. In addition, in order to make the best choice for the entire sentence, the Viterbi algorithm is applied. Our tests with the Penn Treebank showed that our integrated approach achieves 92.3% in precision and 93.2% in recall.
The result is comparable with or better than the current state of the art. In the following sections, we will describe the details of the algorithm, parameter estimation and search algorithms in Section 2. The experiment results are given in Section 3. In Section 4 we make further analysis and comparison. In the final section we give some conclusions.

2 The statistical approach

In this section, we will describe the two-pass statistical model, parameter training and the Viterbi algorithm for the search of the best sequences of POS tagging and baseNP identification. Before describing our algorithm, we introduce some notation.

2.1 Notation

Let us express an input sentence E as a word sequence and a sequence of POS tags respectively as follows:

  E = w_1 w_2 ... w_{n-1} w_n
  T = t_1 t_2 ... t_{n-1} t_n

where n is the number of words in the sentence, and t_i is the POS tag of the word w_i. Given E, the result of baseNP identification is assumed to be a sequence in which some words are grouped into baseNPs as follows:

  ... w_{i-1} [w_i w_{i+1} ... w_j] w_{j+1} ...

The corresponding tag sequence is as follows:

  (a) B = ... t_{i-1} [t_i t_{i+1} ... t_j] t_{j+1} ... = ... t_{i-1} b_{i,j} t_{j+1} ... = n_1 n_2 ... n_m

in which b_{i,j} corresponds to the tag sequence of a baseNP: [t_i t_{i+1} ... t_j]. b_{i,j} may also be thought of as a baseNP rule. Therefore B is a sequence of both POS tags and baseNP rules. Thus n_i ∈ (POS tag set ∪ baseNP rule set), 1 ≤ i ≤ m. This is the first expression of a sentence with baseNPs annotated. Sometimes, we also use the following equivalent form:

  (b) Q = ... (t_{i-1}, bm_{i-1}) (t_i, bm_i) (t_{i+1}, bm_{i+1}) ... (t_j, bm_j) (t_{j+1}, bm_{j+1}) ... = q_1 q_2 ... q_n

where each POS tag t_i is associated with its positional information bm_i with respect to baseNPs. The positional information is one of {F, I, E, O, S}.
F, E and I mean respectively that the word is the left boundary, the right boundary of a baseNP, or at another position inside a baseNP. O means that the word is outside a baseNP. S marks a single-word baseNP. This second expression is similar to that used in [Marcus 1995]. For example, the two expressions of the example given in Figure 1 are as follows:

  (a) B = [NNS] IN [VBG NN] VBD RBR IN [DT JJ NNS]
  (b) Q = (NNS S) (IN O) (VBG F) (NN E) (VBD O) (RBR O) (IN O) (DT F) (JJ I) (NNS E) (. O)

2.2 An 'integrated' two-pass procedure

The principle of our approach is as follows. The most probable baseNP sequence B* may be expressed generally as follows:

  B* = argmax_B (P(B|E))

We separate the whole procedure into two passes, i.e.:

  B* ≈ argmax_B (P(T|E) × P(B|T,E))                                   (1)

In order to reduce the search space and computational complexity, we only consider the N best POS taggings of E, i.e.

  (T_1, ..., T_N) = N-best_T (P(T|E))                                  (2)

Therefore, we have:

  B* ≈ argmax_{B, T=T_1,...,T_N} (P(T|E) × P(B|T,E))                   (3)

Correspondingly, the algorithm is composed of two steps: determining the N-best POS taggings using Equation (2), and then determining the best baseNP sequence from those POS sequences using Equation (3). One can see that the two steps are integrated together, rather than separated as in the other approaches. Let us now examine the two steps more closely.

2.3 Determining the N best POS sequences

The goal of the algorithm in the 1st pass is to search for the N-best POS sequences within the search space (POS lattice). According to Bayes' Rule, we have

  P(T|E) = P(E|T) × P(T) / P(E)

Since P(E) does not affect the maximization of P(T|E), equation (2) becomes

  (T_1, ..., T_N) = N-best_T (P(T|E)) = N-best_T (P(E|T) × P(T))       (4)

We now assume that the words in E are independent.
Thus

  P(E|T) ≈ ∏_{i=1}^{n} P(w_i | t_i)                                    (5)

We then use a trigram model as an approximation of P(T), i.e.:

  P(T) ≈ ∏_{i=1}^{n} P(t_i | t_{i-2}, t_{i-1})                         (6)

Finally we have

  (T_1, ..., T_N) = N-best_T (P(T|E)) = N-best_T (∏_{i=1}^{n} P(w_i | t_i) × P(t_i | t_{i-2}, t_{i-1}))   (7)

In the Viterbi algorithm of N-best search, P(w_i | t_i) is called the lexical generation (or output) probability, and P(t_i | t_{i-2}, t_{i-1}) is called the transition probability in the Hidden Markov Model.

2.3.1 Determining the baseNPs

As mentioned before, the goal of the 2nd pass is to search for the best baseNP sequence given the N-best POS sequences. Considering E, T and B as random variables, according to Bayes' Rule, we have

  P(B|T,E) = P(B|T) × P(E|B,T) / P(E|T)

Since

  P(B|T) = P(T|B) × P(B) / P(T)

we have

  P(B|T,E) = P(T|B) × P(B) × P(E|B,T) / (P(T) × P(E|T))                (8)

Because we search for the best baseNP sequence for each possible POS sequence of the given sentence E,

  P(E|T) × P(T) = P(E ∩ T) = const.

Furthermore, from the definition of B, during each search procedure we have P(T|B) = ∏ P(t_i, ..., t_j | b_{i,j}) = 1. Therefore, equation (3) becomes

  B* = argmax_{B, T=T_1,...,T_N} (P(T|E) × P(B|T,E)) = argmax_{B, T=T_1,...,T_N} (P(T|E) × P(E|B,T) × P(B))   (9)

Using the independence assumption, we have

  P(E|B,T) ≈ ∏_{i=1}^{n} P(w_i | t_i, bm_i)                            (10)

With a trigram approximation of P(B), we have:

  P(B) ≈ ∏_{i=1}^{m} P(n_i | n_{i-2}, n_{i-1})                         (11)

Finally, we obtain

  B* = argmax_{B, T=T_1,...,T_N} (P(T|E) × ∏_{i=1}^{n} P(w_i | t_i, bm_i) × ∏_{i=1}^{m} P(n_i | n_{i-2}, n_{i-1}))   (12)

To summarize: in the first step, the Viterbi N-best searching algorithm is applied in the POS tagging procedure. It determines a path probability f_t for each POS sequence, calculated as follows:

  f_t = ∏_{i=1}^{n} p(w_i | t_i) × p(t_i | t_{i-2}, t_{i-1})

In the second step, for each possible POS tagging result, the Viterbi algorithm is applied again to search for the best baseNP sequence. Every baseNP sequence found in this pass is also associated with a path probability:

  f_b = ∏_{i=1}^{n} p(w_i | t_i, bm_i) × ∏_{i=1}^{m} p(n_i | n_{i-2}, n_{i-1})

The integrated probability of a baseNP sequence is determined by f_t^α × f_b, where α is a normalization coefficient (α = 2.4 in our experiments). When we determine the best baseNP sequence for the given sentence E, we also determine the best POS sequence of E, which corresponds to the best baseNP of E.

Now let us illustrate the whole process through an example: "stock was down 9.1 points yesterday morning.". In the first pass, one of the N-best POS tagging results of the sentence is: T = NN VBD RB CD NNS NN NN. For this POS sequence, the 2nd pass will try to determine the baseNPs as shown in Figure 2. The details of the path in the dashed line are given in Figure 3. Its probability calculated in the second pass is as follows (Φ is a pseudo variable):

  P(B|T,E) = p(stock | NN, S) × p(was | VBD, O) × p(down | RB, O) × p(NUMBER | CD, B)
             × p(points | NNS, E) × p(yesterday | NN, B) × p(morning | NN, E) × p(. | ., O)
             × p([NN] | Φ, Φ) × p(VBD | Φ, [NN]) × p(RB | [NN], VBD) × p([CD NNS] | VBD, RB)
             × p([NN NN] | RB, [CD NNS]) × p(. | [CD NNS], [NN NN])

  Figure 2: All possible brackets of "stock was down 9.1 points yesterday morning"
  Figure 3: The transformed form of the path with the dashed line for the second-pass processing

2.4 The statistical parameter training

In this work, the training and testing data were derived from the 25 sections of the Penn Treebank. We divided the whole Penn Treebank data into two sections, one for training and the other for testing. As required in our statistical model, we have to calculate the following four probabilities: (1) P(t_i | t_{i-2}, t_{i-1}), (2) P(w_i | t_i), (3) P(n_i | n_{i-2}, n_{i-1}) and (4) P(w_i | t_i, bm_i). The first and the third parameters are trigrams of T and B respectively. The second and the fourth are lexical generation probabilities. Probabilities (1) and (2) can be calculated from POS-tagged data with the following formulae:

  p(t_i | t_{i-2}, t_{i-1}) = count(t_{i-2} t_{i-1} t_i) / Σ_j count(t_{i-2} t_{i-1} t_j)   (13)

  p(w_i | t_i) = count(w_i with tag t_i) / count(t_i)                                       (14)

As each sentence in the training set has both POS tags and baseNP boundary tags, it can be converted into the two sequences B (a) and Q (b) described in the last section. Using these sequences, parameters (3) and (4) can be calculated; the calculation formulas are similar to equations (13) and (14) respectively. Before training the trigram model (3), all possible baseNP rules should be extracted from the training corpus. For instance, the following three sequences are among the baseNP rules extracted. There are more than 6,000 baseNP rules in the Penn Treebank. When training the trigram model (3), we treat those baseNP rules in two ways. (1) Each baseNP rule is assigned a unique identifier (UID). This means that the algorithm considers the corresponding structure of each baseNP rule. (2) All of those rules are assigned the same identifier (SID). In this case, those rules are grouped into the same class.
Nevertheless, the identifiers of baseNP rules are still different from the identifiers assigned to POS tags. We used the approach of Katz (Katz, 1987) for parameter smoothing, and built a trigram model to predict the probabilities of parameters (1) and (3). In the case that unknown words are encountered during baseNP identification, we calculate parameters (2) and (4) in the following way:

  p(w_i | bm_i, t_i) = count(bm_i, t_i) / max_j(count(bm_j, t_i))^2    (15)

  p(w_i | t_i) = count(t_i) / max_j(count(t_j))^2                      (16)

Here, bm_j indicates all possible baseNP labels attached to t_i, and t_j is a POS tag guessed for the unknown word w_i.

3 Experiment result

We designed five experiments as shown in Table 1. "UID" and "SID" mean respectively that an identifier is assigned to each baseNP rule or the same identifier is assigned to all the baseNP rules. "+1" and "+4" denote the number of best POS sequences retained in the first step. And "UID+R" means the POS tagging result of the given sentence is totally correct for the 2nd step. This provides an ideal upper bound for the system. The reason why we choose N = 4 for the N-best POS tagging can be explained in Figure 4, which shows how the precision of POS tagging changes with the number N.

  [Figure 4: POS tagging precision with respect to different numbers of N-best]

In the experiments, the training and testing sets are derived from the 25 sections of the Wall Street Journal distributed with the Penn Treebank II, and the definition of baseNP is the same as Ramshaw's. Table 1 summarizes the average performance on both baseNP tagging and POS tagging. Each section of the whole Penn Treebank was used as the testing data and the other 24 sections as the training data; in this way we have done the cross-validation experiments 25 times.
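The relative-frequency estimates of equations (13) and (14) can be sketched as follows; the tiny tagged corpus is an illustrative stand-in for the Penn Treebank training data, and no smoothing is applied.

```python
from collections import Counter

# Sketch of equations (13)-(14): trigram transition and lexical
# generation probabilities as relative frequencies over tagged data.
# Toy corpus for illustration only.

tagged = [("the", "DT"), ("dog", "NN"), ("barks", "VBZ"),
          ("the", "DT"), ("cat", "NN"), ("sleeps", "VBZ")]

tags = [t for _, t in tagged]
tri = Counter(zip(tags, tags[1:], tags[2:]))   # count(t_{i-2} t_{i-1} t_i)
lex = Counter(tagged)                          # count(w_i with tag t_i)
tag_count = Counter(tags)                      # count(t_i)

def p_trigram(t2, t1, t):
    # equation (13): normalize by all continuations of the history (t2, t1)
    denom = sum(c for (a, b, _), c in tri.items() if (a, b) == (t2, t1))
    return tri[(t2, t1, t)] / denom

def p_lexical(w, t):
    # equation (14)
    return lex[(w, t)] / tag_count[t]

print(p_trigram("DT", "NN", "VBZ"))  # every DT NN here is followed by VBZ
print(p_lexical("dog", "NN"))        # half of the NN tokens are "dog"
```

The same counting scheme, applied to the B and Q sequences, yields parameters (3) and (4).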
Table 1. The average performance of the five experiments.
          Precision    Recall       F-Measure    (P+R)/2      Precision
          (baseNP %)   (baseNP %)   (baseNP %)   (baseNP %)   (POS %)
  UID+1   92.75        93.30        93.02        93.02        97.06
  UID+4   92.80        93.33        93.07        93.06        97.02
  SID+1   86.99        90.14        88.54        88.56        97.06
  SID+4   86.99        90.16        88.55        88.58        97.13
  UID+R   93.44        93.95        93.69        93.70        100

  [Figure 5: Precision under different training sets and different POS tagging results]
  [Figure 6: Recall under different training sets and different POS tagging results]
  [Figure 7: POS tagging precision under different training sets]

Figures 5-7 summarize the outcomes of our statistical model on various sizes of the training data. The x-coordinate denotes the size of the training set, where "1" indicates that the training set is from sections 0-8 of the Penn Treebank, "2" corresponds to the corpus that adds the additional three sections 9-11 to "1", and so on. In this way the size of the training data becomes larger and larger. In those cases the testing data is always section 20 (which is excluded from the training data). From Figure 7, we learned that POS tagging and baseNP identification influence each other. We conducted two experiments to study whether the POS tagging process can make use of baseNP information. One is UID+4, in which the precision of POS tagging dropped slightly with respect to the standard POS tagging with trigram Viterbi search. In the second experiment, SID+4, the precision of POS tagging increased slightly. This result shows that POS tagging can benefit from baseNP information.
Whether or not the baseNP information can improve the precision of POS tagging in our approach is determined by the identifier assignment of the baseNP rules when training the trigram model of P(n_i | n_{i-2}, n_{i-1}). In the future, we will further study optimal baseNP rule clustering to further improve the performance of both baseNP identification and POS tagging.

4 Comparison with other approaches

To our knowledge, three other approaches to baseNP identification have been evaluated using the Penn Treebank: Ramshaw & Marcus's transformation-based chunker, Argamon et al.'s MBSL, and Cardie's Treebank_Lex. In Table 2, we give a comparison of our method with these three. In this experiment, we use the testing data prepared by Ramshaw (available at http://www.cs.biu.ac.il/~yuvalk/MBSL); the training data is selected from the 24 sections of the Penn Treebank (excluding section 20). We can see that our method achieves better results than the others.

Table 2. The comparison of our statistical method with three other approaches.
                 Transformation-Based      Treebank_Lex  MBSL   Unified Statistical
                 (Training data: 200k)
  Precision (%)  91.8                      89.0          91.6   92.3
  Recall (%)     92.3                      90.9          91.6   93.2
  F-Measure (%)  92.0                      89.9          91.6   92.7
  (P+R)/2        92.1                      90.0          91.6   92.8

Table 3. The comparison of some characteristics of our statistical method with three other approaches.
                         Transformation-Based  Treebank_Lex  MBSL  Unified Statistical
  Unifying POS & baseNP  NO                    NO            NO    YES
  Lexical Information    YES                   YES           NO    YES
  Global Searching       NO                    NO            YES   YES
  Context                YES                   NO            YES   YES

Table 3 summarizes some interesting aspects of our approach and the three other methods. Our statistical model unifies baseNP identification and POS tagging through tracing N-best sequences of POS tagging in the pass of baseNP recognition, while the other methods use POS tagging as a pre-processing procedure. From Table 1, if we review the 4 best outputs of POS tagging, rather than only one, the F-measure of baseNP identification is improved from 93.02% to 93.07%.
After considering baseNP information, the error ratio of POS tagging is reduced by 2.4% (comparing SID+4 with SID+1). The transformation-based method (R&M 95) identifies baseNPs within a local window of the sentence by matching transformation rules. Similarly to MBSL, the 2nd pass of our algorithm traces all possible baseNP brackets, and makes a global decision through Viterbi searching. On the other hand, unlike MBSL, we take lexical information into account. The experiments show that lexical information is very helpful to improve both precision and recall of baseNP recognition. If we neglect the probability ∏_{i=1}^{n} P(w_i | t_i, bm_i) in the 2nd pass of our model, the precision/recall ratios are reduced to 90.0%/92.4% from 92.3%/93.2%. Cardie's approach to Treebank rule pruning may be regarded as a special case of our statistical model, since the maximum-matching algorithm of baseNP rules is only a simplified processing version of our statistical model. Compared with this rule pruning method, all baseNP rules are kept in our model. Therefore in principle we have less likelihood of failing to recognize baseNP types. As to the complexity of the algorithm, our approach is determined by the Viterbi algorithm, i.e. O(n), linear in the sentence length.

5 Conclusions

This paper presented a unified statistical model to identify baseNPs in English text. Compared with other methods, our approach has the following characteristics: (1) baseNP identification is implemented in two related stages: N-best POS taggings are first determined, then baseNPs are identified given the N-best POS sequences. Unlike other approaches that use POS tagging as preprocessing, our approach is not dependent on perfect POS tagging; moreover, we can apply baseNP information to further increase the precision of POS tagging.
These experiments triggered an interesting future research challenge: how to cluster certain baseNP rules into certain identifiers so as to improve the precision of both baseNP and POS tagging. This is one of our further research topics. (2) Our statistical model makes use of more lexical information than other approaches. Every word in the sentence is taken into account during baseNP identification. (3) The Viterbi algorithm is applied to make a global search at the sentence level. Experiments with the same testing data used by the other methods showed that the precision is 92.3% and the recall is 93.2%. To our knowledge, these results are comparable with or better than all previously reported results.

References

Eric Brill and Grace Ngai. (1999) Man vs. machine: A case study in baseNP learning. In Proceedings of the 18th International Conference on Computational Linguistics, pp. 65-72. ACL'99.
S. Argamon, I. Dagan, and Y. Krymolowski (1998) A memory-based approach to learning shallow language patterns. In Proceedings of the 17th International Conference on Computational Linguistics, pp. 67-73. COLING-ACL'98.
Cardie and D. Pierce (1998) Error-driven pruning of treebank grammars for baseNP identification. In Proceedings of the 36th International Conference on Computational Linguistics, pp. 218-224. COLING-ACL'98.
Lance A. Ramshaw and Michael P. Marcus (in press). Text chunking using transformation-based learning. In Natural Language Processing Using Very Large Corpora. Kluwer. Originally appeared in the second workshop on very large corpora, WVLC'95, pp. 82-94.
Viterbi, A.J. (1967) Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Transactions on Information Theory IT-13(2): pp. 260-269, April 1967.
S.M. Katz (1987) Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech and Signal Processing, Volume ASSP-35, pp. 400-401, March 1987.
Church, Kenneth.
(1988) A stochastic parts program and noun phrase parser for unrestricted text. In Proceedings of the Second Conference on Applied Natural Language Processing, pp. 136-143. Association for Computational Linguistics.
M. Marcus, M. Marcinkiewicz, and B. Santorini (1993) Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2): 313-330.
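The second-pass search described in the paper above (a global Viterbi decision over baseNP bracket tags that folds in the lexical term ∏_{i=1}^{n} P(w_i | t_i, bm_i)) can be sketched in a few lines. This is an illustrative toy, not the paper's trained model: the BIO-style tag set and the transition and emission tables below are all hypothetical placeholders.

```python
import math

# Hypothetical bracket tags: B = baseNP-begin, I = baseNP-inside, O = outside.
TAGS = ["B", "I", "O"]

def viterbi_chunk(words, pos_tags, trans, emit):
    """Global search over bracket-tag sequences, linear in sentence length.

    trans[prev][cur]      plays the role of P(bm_i | bm_{i-1})
    emit[(word, pos, bm)] plays the role of the lexical term P(w_i | t_i, bm_i)
    """
    # Log-probability of the best path ending in each tag for the first word.
    score = {t: math.log(trans["<s>"][t]) +
                math.log(emit.get((words[0], pos_tags[0], t), 1e-6))
             for t in TAGS}
    back = []  # backpointers, one dict per position after the first
    for w, p in zip(words[1:], pos_tags[1:]):
        prev_score, score, ptr = score, {}, {}
        for t in TAGS:
            best_prev = max(TAGS, key=lambda s: prev_score[s] + math.log(trans[s][t]))
            ptr[t] = best_prev
            score[t] = (prev_score[best_prev] + math.log(trans[best_prev][t]) +
                        math.log(emit.get((w, p, t), 1e-6)))
        back.append(ptr)
    # Recover the best tag sequence right-to-left.
    last = max(TAGS, key=lambda t: score[t])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Toy tables (made up for illustration only).
trans = {"<s>": {"B": 0.5, "I": 0.1, "O": 0.4},
         "B":   {"B": 0.1, "I": 0.6, "O": 0.3},
         "I":   {"B": 0.1, "I": 0.4, "O": 0.5},
         "O":   {"B": 0.4, "I": 0.1, "O": 0.5}}
emit = {("the", "DT", "B"): 0.9,
        ("dog", "NN", "I"): 0.9,
        ("barks", "VBZ", "O"): 0.9}

print(viterbi_chunk(["the", "dog", "barks"], ["DT", "NN", "VBZ"], trans, emit))
# -> ['B', 'I', 'O']  i.e. one baseNP bracket: [the dog] barks
```

Because each position only needs the best score per tag from the previous position, the search visits O(n) positions with constant work per position, matching the O(n) complexity claim above.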
[The remainder of this chunk is a second paper whose body text was extracted with a broken font encoding and is unrecoverable. Only the residue of its figures is legible and is preserved below.]

[Figure: Performance (F-Measure) vs. Size of Training Set (words); curves for Sequential Training, Active Learning (Vote Entropy Model), and Active Learning (F-complement model), with relative training size for the same performance.]

[Figure: Performance (F-Measure) vs. Human Labor (Minutes); curves for Active Learning (Expert Annotation), Active Learning (Non-expert Annotation), Sequential Training (Expert Annotation), and Rule Writing (Non-expert).]

[Figure: Performance (F-Measure) vs. Human Labor (Minutes); curves for Hand-Coded Rules (non-expert), Trained on Annotation (non-expert), and Trained on Annotation (expert).]

[Figure: Learned System Performance as a percentage of the Human Annotator's Performance (relative to Treebank) vs. Human Labor (Minutes); points for Systems 1 through 6.]
µŽÄ±º½ ¾¿³Fº¶–ºA¸ ¹±»¿¹±Ã«º+¾–º»¿¶à±½Fºz¸&Âd¶–¶–µŽÈFÆÅºä¹Ç¸F¸ »–„勵ŽÃ«¹Ç¾–µŽÂ±½ ÂÇÌ ¾¿³FµŽ¶4ÀÂj¶[¾,ÌÍÒ ½ À¾¿µÅÂj½ÐjµÅÄjº½âÄ޹ǻ¿µ·¹ÇÈFƎº"¾¿µÅ뺫µŽ½dÄjº¶[¾–뺽d¾ ©K¹Ç½ ÎzƎº¹Ç»¿½FµŽ½FЫëº+¾–³ ÂÕΗú µŽ¶Ô ú"!$# ˆ §&%('$)o*!,+§+üúMÄ©,þ ³ ­g ô î ü/.10 î32 54 6fþ î üj©87ü:9Š ô î ú 5;þ–þ  ÆÅ¾–³FÂjÒFб³ÁÉfºÏ¹±¶¿¶[Ò Ã-ºKº  Ò&¹ÇÆÆ·¹ÇÈS±»ÚÀ+Âd¶Ê¾W» ¹Þ¾¿º¶ 9Š ô ÌÍÂj»¹±½F½F±¾¿¹Þ¾¿µÅÂj½¹±½ Î⻿ÒFƎº"É1»¿µ‘¾¿µÅ½FÐ&àI¾–³Fºn¶[º-ÃÖ¹„á ¶–ÒFÈ ¶[¾¿¹±½j¾¿µŽ¹±ÆÅƎá§Î‹µIº»/µÅ½¶[ÂjÃ-ºNº½ÕÄյŻ¿Â±½F뺽j¾ ¶à„¹Ç½&Î4À+º»–Ë ¾ ¹ÇµŽ½FÆÅáÉ1µÅƎƋÈSº1³FµÅÐj³Fº»ÌÍÂj»0¸F»¿ÂÇÌͺ¶¿¶–µÅÂj½ ¹ÇÆÅË  Ò ¹±ÆÅµÅ¾Êá4¹Ç½F½ ÂÇË ¾ ¹Þ¾–µŽÂ±½^±»Ý»¿ÒFƎº+ËýÈ ¹±¶–ºÎÎFºıºÆÅÂj¸F뺽d¾Û  ½ Î^É1³FµÅƎº¾¿³Fº ºn¶Ê¾¿µÅÃÖ¹Þ¾¿º¶"ÂÇÌ´¾–³FºAë¹jÀ ³FµŽ½FºAÀ+á‹ÀÆÅºAÀÂj¶[¾½FºÀº¶¿¶–¹±»–á⾖ ¶–ÒF¸F¸S±»–¾%¾–³ µŽ¶/ɏÂj»–Ó,±½ñ#µÅ½ÕҋåÕËýÈ ¹±¶–ºÎ4ò´ôÏ ¶#Ą¹±»–á¶[Âjëº+Ë É1³ ¹Ç¾à§¾¿³FºáK¹Ç»¿º »–ºƎ¹Ç¾–µŽÄ±ºƎáM΋ɴ¹Ç»–ÌͺÎQÈÕáÚ¾¿³FºÆ·¹ÇÈS±» ÀÂj¶[¾¿¶ۑ<ºä³ ¹„ıºä¹±¶¿¶[Ò Ã-ºnξ¿³ ¹Þ¾-¾–³FºäµŽ½‹ÌÍ»¿¹j¶Ê¾¿»–Ò&À¾–Ò »–º ΋ºıºÆÅÂj¸F뺽d¾/À+Âd¶Ê¾ ¶%ÌÍÂj»%¾¿³Fº ¾ ¹ÇбÐjµÅ½ Ðݹǽ Î§»¿ÒFƎº+ËýÉ1»–µÅ¾–µŽ½FÐ ôÂd¶Ê¾@‹΋ºÆ#ò0¹Ç» ¹Çëº+¾¿º»  ½ ½FÂǾ ¹Þ¾–µŽÂ±½ æ1ÒFƎº+ËýÉ1»¿µ‘¾¿µÅ½FÐ ­g ô ³ Uè½FÌÍ»¿¹j¶Ê¾¿»–Ò À+¾–ÒF»¿º\.ݺıºƎ±¸ Ã-º½d¾ôÂj¶[¾4üÍÌͱ»´¾ ¹ÇбÐjµÅ½ ÐÞæ$<Vº½ÕÄյ޻–Âj½F뺽d¾ þ ܋³ ¹Ç»¿ºΠ܋³ ¹Ç»¿ºÎ .=< ³ ðÝÒFÃ"ÈSº»ÝÂÇÌ/µŽ½FµÅ¾–µ·¹ÇÆ%Ðj±ƷÎz¶[¾¿¹±½ ÎF¹±»¿Îz¶[º½j¾¿º½ Àº¶fÌÍÂj»1¾–» ¹ÇµŽ½FµŽ½FÐ EE EE 2 54 6 ³ Ðj±ƷÎz¶[¾¿¹±½ ÎF¹±»¿Îü²/»–ººÈ ¹±½FÓFþ  ½F½F±¾¿¹Þ¾¿µÅÂj½^ôÂj¶[¾4ü¸&º»Ý¶–º½d¾–º½ À+º„þ ¯ ¯ 9Š ô ³ ñ/¹ÇÈS±»ÝôÂj¶[¾1Ìͱ»  ½F½F±¾¿¹Þ¾¿µÅÂj½zÂj»(æ<;ü͸Sº»1³FÂjÒF»þ >o!FÛ EE„³ Â±ÒF» >o!FÛ EE„³ Â±ÒF» ú  ; ³ ôÂd¶Ê¾1±ÌTâ¹jÀ ³FµŽ½Fº"ôáÕÀÆÅºn¶fÌͱ»  ½F½FÂǾ ¹Þ¾¿µÅÂj½5Þæ$< ÜÕÒF¸ ¸&Âj»[¾ >^EFې^LG„³FÂjÒF» >^EFÛé!„³FÂjÒF» © ³ " ¹±»–µ·¹ÇÈ ÆÅº¾–µŽÃ«º4µÅ½ÕÄjº¶[¾–뺽d¾ ²/¹±ÈFƎºK%FÔ0ø0åF¹Çë¸FƎº‡±½ º+¾¿¹±»–ád˄´¹±¶–ºÎzôÂd¶Ê¾(ò0¹Ç» ¹Ç뺾–º» ¶NÌͱ»@‹΋ºÆôÂ±Ã«¸ ¹±»–µ·¶[Âj½ º½ÕÄյ޻–Âj½F뺽d¾¿¶à„É1³FµÅƎºNµŽ½FµÅ¾–µ·¹ÇƎÆÅá§Ä޹ǻ¿µ·¹ÇÈFƎº¹jÀ+»¿Âj¶¿¶Yëº+¾–³FË Â‹ÎF¶à ³ ¹„ıº¹ÇƎ»¿º¹±ÎFáÈSºº½bÈS±»¿½Fº¹Ç½ Îõ¾–¾–³Fºº+åÕ¾¿º½d¾ ¾–³&¹Þ¾ ÈSÂǾ–³«µŽ½d¾–º»–̹±Àº1¶[ዶ[¾–ºÃÖ¶¸S±»–¾¾–Â4½FºÉ߯·¹Ç½FÐjÒ ¹ÇÐjº¶ 
¹Ç½&Îß΋Âjë¹±µÅ½&¶"É1µÅ¾–³ã»–ºƎ¹Ç¾–µŽÄ±ºzº¹±¶–º±à¾¿³FºµÅ½&À+»¿ºëº½d¾¿¹±Æ ΋ºıºƎ±¸ Ã-º½d¾"À+Âj¶[¾¿¶4Ìͱ»"½ ºÉ ¾–»¿µŽ¹±ÆŽ¶"¹Ç»¿ºÖÆÅµŽÓ±ºÆÅá^¾–ÂâÈSº »¿ºÆ·¹Þ¾–µŽÄ±ºÆÅáƎÂÞÉÙ¹±½ Î À+±ë¸ ¹±»¿¹±ÈFƎº±Û8µŽ½ ¹ÇƎƎá±àY¾¿³FµŽ¶"À+Âj¶[¾ ë‹΋ºƋ¾ ¹ÇÓjº¶µŽ½j¾¿Â¹±ÀÀ+±Ҡ½j¾¾¿³FºÝÀ+Âd¶Ê¾0±ÌI΋ºıºÆÅÂj¸FµÅ½ ÐÂj» ¹±À  ÒFµÅ»¿µŽ½FÐA¾¿³Fº?. 0 бÂjƎ΍¶Ê¾ ¹Ç½ ÎF¹±»¿Î⾿¹±Ð±Ð±ºnÎ^ÎF¹Ç¾¿¹’üͺjÛ Ð&Û ÌÍ»¿Â±ÃÚ¾¿³Fº²/»¿ººÈ&¹Ç½FÓFþ&¾–Â1¸ »–ÂÞÄÕµ·Î‹º0µÅ½ µ‘¾¿µŽ¹±Æj¹Ç½&΄Âj»YµÅ½ À»–ºË 뺽d¾¿¹±ÆI¾–» ¹ÇµŽ½FµŽ½FÐ"Ìͺº΋È&¹±À ÓÖ¾¿Â¾¿³Fº4¹Ç½F½ ÂǾ¿¹Ç¾–Âj»Âj»1»–ÒFƎº É1»¿µ‘¾¿º»%¾¿Â(³FºÆÅ¸4ÌÍÂj»¿ÀºNÀ+Âj½ ¶–µŽ¶[¾–º½ À+á,É1µ‘¾¿³4¾–³FºÐ±ÂjƎÎ4¶[¾¿¹±½‹Ë ÎF¹±»¿ÎYÛ-<’ºz³&¹„ıº?ÌÍÂjÒF½ Îõ¾–³ ¹Ç¾-ÈSÂǾ¿³bƎº¹±»–½FµŽ½FÐâë‹΋ºn¶ À¹±½õÈSº½Fº÷F¾ÌÍ»¿Â±Ã3¾¿³FµŽ¶-³FµŽÐ±³  Ò&¹ÇƎµ‘¾Êá’Ìͺºn΋Ƞ¹jÀ ÓSà0Ƞҋ¾ ¾–³ ºâÀ+Âj¶[¾ ¯ ÂÇÌ4΋ºÄjºƎ±¸FµŽ½FÐ ¶[Ò&À ³ã¹ ³FµŽÐ±³FË  Ò ¹±ÆÅµÅ¾Êá»–ºË ¶–±ÒF» À+º,Ìͱ»1½FºÉMÆ·¹Ç½FÐjÒ ¹ÇÐjº¶Â±»1΋Âjë¹±µÅ½&¶´µŽ¶´ÒF½ Ód½ ÂÞÉ1½%à ÈFҋ¾,ÆÅµŽÓ±ºÆÅáA¾–Â?ÈSº³FµŽÐ±³Fº»1¾–³&¹Ç½ä¾–³Fº"½F±½‹Ëýº+勸Sº»–¾(Ǝ¹±È&Âj» À+Âd¶Ê¾ ¶´ºë¸FƎÂÞᱺnÎA³Fº»–ºjÛ @ DíÊH#aX1a ^® ì,ì _0°‹E°FOÊ_ ì n¯,EadH#Z OH%EC ì O ì LA ®WZ1XE ì °‹EL H#aßE ì Z UÚO[adE0Z1XE ì °‹EL H#a Uè½¾–³ º"¸F»¿ºÄÕµŽÂ±Ò ¶(¶–ºÀ+¾–µŽÂ±½ ¶à Éfº4µŽ½dÄjº¶[¾–µŽÐj¹Ç¾–ºÎz¾¿³Fº"¸Sº»–Ë Ìͱ»¿Ãֹǽ Àºz΋µIº»¿º½ Àº¶«¹Ç½ Îß»–ºn¶[ÂjÒF» À+ºAÀÂj¶[¾¿¶-µÅ½ÕÄj±ƎıºÎ Ìͱ»«Ò ¶–µŽ½FЍ³ÕÒFÃÖ¹±½ ¶¾–Â’É1»–µÅ¾–º»¿ÒFÆÅºn¶"ċ¶ÛßÒ ¶–µŽ½FÐ^¾¿³Fºà Ìͱ»Ö¹±½F½FÂǾ ¹Þ¾¿µÅÂj½ ¶ÛqUè½õ¾–³ µŽ¶Ö¶–ºÀ+¾–µŽÂ±½%à ÉfºÉ1µÅƎÆfÌÍÒF»–¾–³Fº» À+Âj븠¹Ç»¿ºÝ¾¿³Fº¶–º¶–áÕ¶[¾–ºÃØÎ‹ºıºƎ±¸ Ã-º½d¾´¸&¹Ç» ¹±Î‹µŽÐ±ÃÖ¶Û  ½F½F±¾¿¹Ç¾–µŽÂ±½‹ËýÈ ¹±¶–ºÎÙ³ÕÒFÃֹǽ ¸ ¹Ç»–¾–µ·À+µŽ¸ ¹Þ¾¿µÅÂj½×³&¹±¶¹ ½ÕÒFÃ"ÈSº»ÂÇÌS¶[µŽÐ±½FµÅ÷&À¹Ç½d¾/¸F» ¹±À+¾–µ·À¹±Æd¹j΋Ä޹ǽd¾¿¹±Ð±º¶%»–ºƎ¹Ç¾–µŽÄ±º ¾–ÂÖ΋ºıºÆÅÂj¸FµÅ½ Ð«¹«¶–ዶʾ¿ºÃØÈÕá?ÃÖ¹±½dÒ&¹ÇÆY»¿ÒFÆÅº˔É1»¿µÅ¾–µŽ½FÐ Ô B  ½F½FÂǾ ¹Þ¾¿µÅÂj½‹Ë”È&¹±¶–ºÎWƎº¹±»–½FµŽ½FÐãÀ¹Ç½QÀ+Âj½j¾¿µÅ½ÕÒFºµÅ½FË Î‹º÷ ½FµÅ¾–ºÆÅájà ÂÞıº»1ɏººӋ¶1¹Ç½&ÎzëÂj½j¾¿³ ¶à É1µÅ¾–³»–ºƎ¹ÇË ¾¿µÅÄjºƎáõ¶[ºƑÌéËèÀ+Âj½j¾ ¹ÇµŽ½FºÎ߹ǽF½F±¾¿¹Ç¾–µŽÂ±½bÎFºÀ+µ·¶–µÅÂj½ ¶¹Ç¾ ºn¹±À ³K¸&ÂjµÅ½d¾nÛ Uè½ÏÀ+±½d¾¿»¿¹j¶Ê¾nà4»–ÒFƎº+ËýÉ1»¿µ‘¾¿º» ¶Ã"Ò ¶[¾ »¿ºÃֹǵ޽MÀ±б½ µ)n¹Ç½d¾?ÂÇ̸SÂǾ¿º½d¾–µ·¹ÇƧ¸F»¿ºÄÕµŽÂ±Ò ¶A»–ÒFƎº 
µŽ½d¾–º» ΋º¸&º½ Î‹º½&À+µŽº¶õÉ1³ º½7¹±Î Î‹µÅ½ Ð¼Â±»õ»¿ºÄÕµ·¶[µŽ½FÐ »¿ÒFƎº¶àAÒFÆÅ¾–µŽÃÖ¹Þ¾–ºÆÅá>ÈS±ÒF½&΋µÅ½ Ð×À+Âj½d¾–µŽ½dÒ ºÎ>»¿ÒFÆÅºË ¶–ዶʾ¿ºÃ7Ðj»–ÂÞÉ´¾¿³zÈÕáAÀ±б½ µ‘¾¿µÅÄjºƎÂj¹jÎÖ̹jÀ¾–Âj»¿¶Û B  ½F½FÂǾ ¹Þ¾¿µÅÂj½‹Ë”È&¹±¶–ºÎ>Ǝº¹Ç»¿½FµŽ½FÐÏÀ¹±½ë±»¿ºÚºIºÀ+Ë ¾¿µÅÄjºƎáMÀ+ÂjÃ"ÈFµŽ½Fº ¾–³ ººI±»–¾¿¶ÂÇÌ«Ã"ҠƑ¾¿µÅ¸FƎºµŽ½ Î‹µÅË ÄÕµ·Î‹Ò ¹ÇÆ·¶Û´²´³ º¾¿¹±Ð±ÐjºÎ¶[º½d¾–º½&À+º¶1ÌÍ»–ÂjÃÑ΋µIº»¿º½d¾ ÎF¹Ç¾¿¹4¶–º+¾ ¶ À¹±½-ÈSºÝ¶–µÅë¸FƎáÀ±½ À¹Þ¾¿º½ ¹Ç¾–ºÎ"¾–ÂÌͱ»¿Ã ¹Æ·¹Ç»¿Ð±º»"ÎF¹Ç¾¿¹^¶–º+¾É1µ‘¾¿³ßÈF»¿Âj¹j΋º»"ÀÂÞıº»¿¹±Ð±º±Û¼Uè½ À±½d¾–» ¹±¶[¾àFµÅ¾,µ·¶(ÃÒ À ³ë±»¿º"΋µÅç?À+ÒFÆÅ¾àSµ‘Ì ½ ÂǾݵŽÃ-Ë ¸SÂj¶¿¶–µÅÈFƎº±à´ÌÍÂj»z¹»¿ÒFƎº^É1»¿µ‘¾¿º»A¾–Âõ»¿º¶–ÒFëºâÉ1³Fº»–º ¹±½FÂǾ¿³Fº»´Âj½Fº§ÆÅºÌé¾´ÂÛ ÒF»–¾–³Fº»–ë±»¿º±à‹À±Ã"È µÅ½FµŽ½FÐ »¿ÒFƎº§ÆÅµ·¶Ê¾ ¶1µ·¶´Ä±º»¿áA΋µÅç?À+ÒFÆÅ¾(ÈSºÀ¹±Ò ¶–º±Ì#¾¿³Fº§¾–µŽÐ±³d¾ ¹±½ ÎWÀ+Âjë¸FÆÅºåbµŽ½d¾–º» ¹±À+¾–µŽÂ±½ãÈ&º¾Êɏºº½Ú¶–Ò ÀÀº¶¿¶[µŽÄ±º »¿ÒFƎº¶ÛNôÂ±Ã"È µÅ½ ¹Ç¾–µŽÂ±½Ö±Ì%»–ÒFƎºÝÉ1»¿µÅ¾–µŽ½Fж–ዶʾ¿ºÃÖ¶ µ·¶ ¾¿³Fº»¿º+ÌÍÂj»–ºÆÅµŽÃ«µ‘¾¿ºÎ"¾–Âı±¾–µŽ½FÐÝÂj»¶–µÅ뵎Ǝ¹±»ÀƎ¹j¶–¶–µÅ÷ º» ¾¿ºÀ ³F½ µ  ÒFºn¶"É1³Fµ·À ³bÀ¹Ç½õÈSº¹Ç¸F¸FƎµŽºÎ ¾¿Â’¹±½F½FÂǾ ¹ÞË ¾¿µÅÂj½¶[ዶ[¾–ºÃÖ¶1¹j¶´ÉºÆÅÆ”Û B æ(ÒFÆÅº˔È&¹±¶–ºÎ7Ǝº¹±»–½ µÅ½FÐ×»–º  ÒFµÅ»¿º¶ã¹ÏÆ·¹Ç»¿Ð±º»ã¶–ÓյůŽÆ ¶–º+¾nà0µÅ½&À+ƎҠ΋µŽ½FÐ^½F±¾"±½ ÆÅᒾ–³ ºAƎµÅ½FÐjÒFµ·¶Ê¾¿µŽÀÖÓÕ½FÂÞÉ1ÆÅË ºn΋бº"½Fººn΋ºÎäÌÍÂj»,¹Ç½ ½FÂǾ ¹Þ¾–µŽÂ±½#à ÈFҋ¾§¹ÇÆ·¶[Â?À±ë¸Sº+Ë ¾¿º½ Àº,µŽ½A»¿ºÐjÒFÆ·¹Ç»Nº勸F»–ºn¶–¶–µŽÂ±½ ¶f¹Ç½ ÎA¹Ç½z¹±ÈFµŽÆÅµÅ¾Êá-¾– Ðj»¿¹j¶[¸¾¿³Fº«À+±ë¸FƎº+åⵎ½d¾–º»¿¹jÀ¾–µŽÂ±½&¶(É1µÅ¾–³FµŽ½¹z»–Ò ÆÅº Ǝµ·¶Ê¾nÛ²´³Fºn¶[º´¹±Î Î‹ºÎ¶[Óյޯůd»–º  ÒFµÅ»¿ºëº½j¾ ¶#½&¹Þ¾–Ò »¿¹±ÆÅÆŽá ¶–³F»¿µÅ½ Ó«¾–³Fº§¸SÂdÂjÆIÂÇÌ#ÄÕµ·¹ÇÈFƎº,¸&¹Ç»–¾–µ·À+µŽ¸ ¹Ç½d¾¿¶f¹Ç½&Î?µŽ½‹Ë À»–ºn¹±¶–º¶¾–³ ºµŽ»(ÆÅµŽÓ±ºÆÅázÀ+Âj¶[¾Û B ´¹j¶[ºnÎßÂj½ãºë¸FµÅ»¿µ·À¹ÇÆ1ÂjÈ ¶–º»¿Ä„¹Ç¾–µŽÂ±½%àN¾–³Fº¸Sº»–Ìͱ»–Ë ÃÖ¹±½ À+º4±Ì»¿ÒFƎº4É1»–µÅ¾–º»¿¶´¾¿º½ Îz¾¿ÂÖº+勳FµŽÈFµ‘¾À+±½&¶[µ·ÎÕË º»¿¹±ÈFƎá^ë±»¿ºÖÄ޹ǻ¿µŽ¹±½ À+ºjà/É1³FµŽÆŽºA¶[ዶ[¾–ºë¶¾–» ¹ÇµŽ½FºÎ Âj½^¹Ç½ ½FÂǾ ¹Þ¾–µŽÂ±½¾¿º½ Î⾖ÂzádµŽºÆ·ÎÃÒ À ³âë±»¿º"À+Âj½‹Ë ¶–µ·¶Ê¾¿º½d¾(»–ºn¶[ҠƑ¾ ¶Û B µŽ½ ¹ÇƎƎá±à ¾–³Fº À+Ò »–»¿º½d¾ ¸Sº»–Ìͱ»¿Ãֹǽ Àº ÂÇÌ ¹±½F½F±¾¿¹Þ¾¿µÅÂj½‹ËýÈ ¹±¶–ºÎ]¾–» 
¹ÇµŽ½FµŽ½FÐ7µ·¶MÂj½FƎáÁ¹7ÆÅÂÞÉfº» ÈS±Ҡ½ ÎõÈ ¹j¶[ºnα½ß¾–³Fºä¸Sº»–Ìͱ»¿Ã«¹±½ À+ºzÂÇÌÝÀÒF»–»¿º½d¾ Ǝº¹±»–½ µÅ½FÐW¹±ÆÅÐj±»¿µ‘¾¿³FÃÖ¶ÛÜյ޽ Àºõ¹Ç½F½F±¾¿¹Ç¾–ºÎKÎF¹Ç¾¿¹ À¹Ç½ÈSºÑÒ ¶[ºnÎ9ÈÕá ÂǾ–³ º»×ÀÒF»¿»–º½j¾ ±»ÙÌÍҋ¾–ÒF»¿º ÃÖ¹jÀ ³FµÅ½ ºƎº¹Ç»¿½FµŽ½FÐ ¾¿ºÀ ³F½Fµ  ÒFº¶à¶–ÒFÈ ¶–º  ÒFº½d¾ ¹±ÆÅÐj±»¿µ‘¾¿³F뵎À?µÅë¸F»¿ÂÞıºÃ-º½d¾¿¶ÃÖ¹„á’áյźƎθSº»–Ìͱ»–Ë ÃÖ¹±½ À+ºµŽÃ«¸F»–ÂÞÄjºëº½d¾ ¶KÉ1µ‘¾¿³F±ҋ¾ ¹Ç½ÕáïÀ ³ ¹±½Fбº µŽ½Q¾–³FºõΠ¹Þ¾¿¹ Û Uè½¼À+±½d¾¿»¿¹j¶Ê¾nàݾ¿³Fº¸&º»[ÌÍÂj»–ÃÖ¹±½ À+º ¹jÀ ³FµŽºıºnΒÈdá¹â¶–º+¾ÂÇÌ(»¿ÒFÆÅºn¶µ·¶ºSºnÀ¾–µŽÄ±ºÆÅá÷ ½ ¹±Æ É1µÅ¾–³ Â±Ò‹¾Ý¹jÎF΋µÅ¾–µŽÂ±½ ¹±ÆY³dҠë¹±½»–ºÄdµ·¶–µÅÂj½%Û ²´³Fº¸SÂǾ¿º½d¾–µ·¹ÇÆQ΋µŽ¶¿¹±ÎFĄ¹±½d¾¿¹ÇÐjº¶±ÌϹǽF½F±¾¿¹Ç¾–µŽÂ±½‹Ë È ¹j¶[ºnÎD¶[ዶ[¾–ºà ΋ºÄjºƎ±¸F뺽j¾ÖÌÍÂj»A¹Ç¸ ¸FÆÅµ·À¹Ç¾–µŽÂ±½ ¶A¶[Ò À ³ ¹j¶õÈ ¹j¶[ºMð,òÀ ³ÕÒF½FÓյ޽FÐٹǻ¿ºWƎµÅëµÅ¾–ºÎ%Û ? µŽÄ±º½>¾¿³Fº ÀÂj¶[¾ë‹΋ºƎ¶ä¸F»¿º¶–º½d¾–ºnÎMµŽ½¼ÜÕºÀ+¾–µŽÂ±½H†Fà±½Fº’¸&±¾–º½‹Ë ¾¿µŽ¹±Æ½ ºÐj¹Ç¾–µŽÄ±º^¶¿À+º½ ¹Ç»¿µŽÂõɏÂjÒFÆ·ÎDÈ&º ¹Ç½Úº½ÕÄdµŽ»¿Â±½F뺽d¾ É1³Fº»–ºz¾–³ ºäÃÖ¹jÀ ³FµÅ½ ºäÀÂj¶[¾«¶–µÅÐj½FµÅ÷&À¹±½j¾¿ÆÅá Âjҋ¾ÊɏºµÅÐj³FºÎ ³ÕÒFÃֹǽ"Ǝ¹±È&Âj»/À+Âd¶Ê¾ ¶àn±»/É1³Fº»¿ºN¹jÀÀ+ºn¶–¶%¾–ÂݹjÀ¾¿µÅÄjº ÆÅºn¹Ç»¿½‹Ë µŽ½FÐ"¹±½ ÎֹǽF½ ÂǾ¿¹Ç¾–µŽÂ±½«µŽ½‹ÌÍ»¿¹j¶Ê¾¿»–Ò&À¾–Ò »–º1É´¹±¶ÒF½&¹„Ä„¹±µÅÆ·¹ÇÈ ÆÅº Âj»"À+Âd¶Ê¾¿ÆÅájÛÿÝÂÞɏºıº»à/ÒF½ Î‹º»"½FÂj»–ÃÖ¹ÇÆÀ+µŽ»¿ÀÒFÃֶʾ ¹Ç½ Àº¶ É1³Fº»–ºÃÖ¹±À ³FµŽ½Fº¹±½ ¹ÇƎዶ[µ·¶Ö±Ì¾–ºåd¾äµŽ¶z¸FÒF» ¶–ÒFºÎYà¹Ç½ Î ¸FÒFÈ ÆÅµ·À΋±Ãֹǵ޽D¹±ÀÀº¶¿¶¾¿Â ±ÒF»Ö¹Ç½ ½FÂǾ ¹Þ¾–µŽÂ±½ã¹Ç½ Îã¹±À+Ë ¾–µŽÄ±º-Ǝº¹Ç»¿½FµŽ½FÐÖ¾¿ÂdÂjÆÅÓյž¿¶µ·¶¹j¶–¶–ÒFëºÎ%àI¶–Ò À ³^¹ä¶¿À+º½&¹Ç»¿µÅ µ·¶´ÒF½FƎµÅÓjºƎá±Û C _ ì G^–íÝaÕOÊ_ ì ²´³Fµ·¶-¸ ¹Ç¸Sº»«³ ¹j¶"µŽÆÅƎҠ¶[¾–» ¹Þ¾¿ºÎ¾¿³ ¹Þ¾-¾–³ º»¿ºä¹±»–ºz¸&±¾–º½FË ¾–µ·¹ÇƎƎáÚÀ+±ë¸SºƎÆÅµŽ½FÐb¸F»¿¹jÀ¾¿µŽÀ¹ÇƧ¹Ç½ ÎW¸Sº»–Ìͱ»¿Ãֹǽ Àº¹jÎÕË Ä޹ǽd¾¿¹±Ð±ºn¶(¾¿Âz¸FÒ »¿¶–ÒFµŽ½FÐä¹jÀ¾¿µÅÄjº+ËýÆÅºn¹Ç»¿½FµÅ½ ÐÖÈ ¹±¶–º΍¹Ç½F½FÂ±Ë ¾¿¹Ç¾–µŽÂ±½K» ¹Þ¾–³ º»¾–³&¹Ç½K»–Ò ÆÅº˔É1»¿µ‘¾¿µÅ½ Ðb¾¿ÂÚ΋ºÄjºƎ±¸QÈ ¹±¶–º ½FÂjÒF½ß¸F³ »¿¹j¶[ºäÀ ³dÒ ½FÓ±º»¿¶Ûõ²´³Fºä»–ºƎ¹Ç¾–µŽÄ±ºAÈ&¹ÇÆ·¹Ç½ Àºz΋ºË ¸Sº½ ÎF¶ÒFÆÅ¾–µŽÃÖ¹Þ¾–ºÆÅá’±½Âj½Fº ¶"ÀÂj¶[¾"ë‹΋ºƔàÈFҋ¾ÐjµÅÄjº½ ¾–³ ºбÂd¹ÇÆ%±Ì뵎½FµŽÃ-µé)µŽ½FÐA¾–ÂǾ ¹ÇÆ/³dҠë¹±½Ǝ¹±È&Âj»ÝÀ+Âd¶Ê¾n࠵ž ¹Ç¸ ¸&ºn¹Ç» ¶4¾–Â^È&ºÀ+Âj½ 
¶[µ·¶[¾–º½d¾¿ÆÅá’ë±»¿º?º+ç?À+µŽº½d¾«¹Ç½ Î ºÌéË ÌͺÀ+¾–µŽÄ±º1¾¿ÂµŽ½dÄjº¶[¾0¾–³Fºn¶[º,³ÕÒFÃֹǽ«»–ºn¶[ÂjÒF» À+º¶µŽ½A¶–ዶʾ¿ºÃ-Ë Î‹ºıºƎ±¸ Ã-º½d¾ÄÕµ·¹§¹Ç½F½F±¾¿¹Ç¾–µŽÂ±½"» ¹Þ¾¿³Fº»¾¿³ ¹Ç½«»¿ÒFÆÅº´É1»¿µ‘¾–Ë µŽ½FÐ Û D ®WGFc ì _/`øÊH/Z(L H$R H ì °Fa ²´³FºÖ¹ÇÒF¾–³FÂj»¿¶Éf±ÒFÆ·ÎâÆŽµÅÓjº-¾–Âz¾–³ ¹±½FӃ&d¹Ç½ÿ,¹^6ʵ·ÀÇà#øN»–µ·À f»–µŽÆŽÆ|àæ(¹±Î‹Ò-ÆÅÂj»–µ·¹Ç½#à#¹Ç½ Î’Ä޹ǻ¿µŽÂ±Ò ¶ëºÃÈ&º»¿¶§Â±Ì¾¿³Fº ðݹǾ–ÒF» ¹ÇÆñ#¹±½FбÒ&¹ÇбºòN»–‹Àº¶¿¶[µŽ½FÐ?ñ#¹±È^¹Þ¾\&±Âj³F½ ¶ÿ(Âj¸‹Ë ÓյŽ&¶ Ìͱ»¾¿³FºµŽ»õÄÞ¹ÇÆŽÒ ¹±ÈFÆÅºDÌͺº΋È&¹±À ÓÙ»¿ºÐd¹Ç» ΋µŽ½FÐM¾¿³Fµ·¶ Éf±»¿ÓSÛ DHP H%CFH ì GIH#a E ÛGFË+ØAÐهÇÊ¥ážÜßÛ æ ÐØAÐÊ`á2ÐÊ#ÖIHÏÛKJªË„ÍهÇ!ÑÒÇwâԄà׀ÛËMLNLOÛ FqهɂهÇ˄Í}ÝYÚ#ÐԄÉwևÐÌÌË+Ç!ÐÎÕ8È+Ç\ÑÒÉwÐË+Ê ×ÒÊ ØÏԄÕ#ÐÑÒÑÒÇwâ3ÑÐÊÝ ØÓ#ÐØ!ÉµÌ ÐȄÈ+É*Ë+ÊԂۗÜÊQPSRUTMVXW&W&Y[Z]\,^N_`Tbadcfe(W?ghicfekjX\lcmWnRpo \rqNcsZ TN\rqtGuvT[\MaMWnRUWn\wVXWTN\xuvTNy{zr|,c:qNcsZ TN\rqtw}=Z]\,^N|,Z]_&csZ Vp_+á Ì ÐØɂÔ5~M€,NÛw‚„ƒ…¥Üm†ˆ‡®ÝmF‰‚S…$Û æ Û[Š7ÇÓË+×ÒØAÐÓ ÑÈwÛ?LL$‹^Û E Ó˄ހÐÎ*É7ØËÐهÙKÐÈ+×ÒÎwÐÑ}ÐÊ#ÐѐÍԄ×ÒÔºÞéÇË È+ÕÉ8ÉnŒȄËÐÎ*È+×ÒÇ!ÊÇÞÈ+É*Ë+ه×ÒÊÇ!ÑÒÇ!Ø×΂ÐÑÊ ÇÓ Ê—Ì ÕËÐԄɂԂۼÜÊ PSRUTMVXW&WUYNZ]\^N_TbaŽcfe(W*,[cfe’‘‰\(\l|lqt”“W&WncsZ]\,^`T/acfe(WŽ‘5_no _nTMVpZ q[csZ TN\Tba•uvT[y5zr|,c:qNcsZ T[\wqt}=Z]\,^N|,Z]_&csZ Vp_+á Ì#ÐØɂÔL€ LNO°ÛwF®Ô„Ô„Ç^Î*×ÐÈ+×ÒÇ!Ê;ÇÞv‚7ÇÙ‡Ì ÓÈÐÈ+×ÒÇ!Ê Ðє…¥×ÒÊ ØÓ ×ÒԁÈ+×ÒÎ‚Ô‚Û – ÛrŠžË+×ÒÑÒÑ¥ÐÊ#֗‡ÏÛ(†®ØAÐ×jÛeLLL}ۙ˜µÐʵä}ԂÛÙKÐÎÕ ×ÒÊ ÉNšKFDÎwÐÔ„É ÔÈ+Ó Ö͗×ÒÊÚ#ÐԄÉ9ÊÇ!Ó ÊÌÕËÐԄÉ8ÎßÕ}ÓÊ à}×ÒÊ ØÛ¼ÜÊQPSRUTMVXW&W&YNo Zf\,^N_›Tbaœ‘u”}žgŸ$Ÿ$ŸÛ=F®Ô„Ô„Ç^΂×ÐÈ+×ÒÇʃÞéÇ˕‚7ÇÙ‡Ì ÓÈÐÈ+×ÒÇ!Ê ÐÑ …`×ÒÊ Ø!Ó×ÒԁÈ+×Î*Ô‚Û – ۈŠžË+×ÒÑÒÑ€Û MLNL$ }Û¢¡¥ËÐÊԁޕÇË+ÙKÐÈ+×ÒÇ!ÊÝYÚ ÐԄÉwÖ¤É*˄Ë+Ç˄ÝFÖË+×ÒäAÉ‚Ê ÑÒÉwÐË+Ê ×ÒÊ ØÐÊ#Ö3Ê ÐÈ+ÓËÐÑ£ÑÐÊ ØÓ#ÐØÉš F1ÎwÐԄÉԁÈ+Ó#Ö}Íq×Ê Ì Ð˄ÈSÇÞ$ԄÌoɂÉ*ÎßÕ9ÈÐØ!Ø×ÊØÛ*uvT[y5zr|,c:qNcsZ T[\wqt}=Z]\,^N|,Z]_&csZ Vp_+á ‹”$£s¤$¥nš  i¤$M€ N~ }Û ‚®Û¦‚TÐËÖ×ÒÉêÐÊ#Ö æ Ûw§×ÉßË+΂É!ÛMLNLO}Û – ˄Ë+Ç˄ÝFÖ}Ë+×ä!ɂÊ(ÌË+ÓÊ ×ÒÊ Ø ÇÞ È„Ë+ɂÉ*Ú#ÐÊàªØËÐÙKÐË+ÔlÞéÇËÚ ÐԄÉ7ÊÇ!ÓÊ£ÌÕËÐԄÉ7×ÖɂÊ^È+ש¨#΂ÐÝ È+×ÒÇʥ۞ÜÊ?PSRUTMVXW&WUYNZ]\,^N_Tbaˆcfe(WŽ,ª[cfe—‘‰\l\(|lqt¦“«WXWncsZf\,^dT/a cfelW™‘5_&_pTMVpZsqNcsZ TN\›Tba‰uvTNy5zr|,c:qNcsZsTN\wqt}KZf\,^N|,Z]_&csZ Vp_+áÌ#ÐØÉ‚Ô 
‹”MOM€‹‹i¤ÛlF®Ô„Ô„Ç^Î*×ÐÈ+×ÒÇ!ÊKÇÞ ‚7Ç!هÌÓÈÐÈ+×ÇÊ#ÐÑr…¥×ÒÊØ!Ó ×ÒԁÈ+×ÒÎ‚Ô‚Û J\ÛS‚7ÕÓË+ÎßÕ`ÛMLNOOÛF5ԁÈ+Ç^ÎÕ#ÐԁÈ+×ÒÎgÌ Ð˄È+ÔKÌË+ÇØËÐْÐÊ#Ö ÊÇ!ÓÊ—Ì ÕËÐԄÉ8Ì ÐË+ԄÉ*ËÏޕÇË‡Ó ÊË+ɂԁȄË+×ÒÎ*È+ÉwÖ¼È+ÉpŒ}ÈwÛgÜÊIPSRUTo VXW&WUYNZ]\^N_œT/acfe(W¬wW&V&TN\rY«uvTN\aMWnR&Wn\rVnW*T[\?‘™z$zwt Z:WUY*­ŽqNc o |,Rbqt”} qN\,^N|lqM^WŽPSRUTMVXWn_&_&Z]\,^álÌ#ÐØÉ‚Ô ~i€¤Û=FSԄԄÇ^΂×ÐÝ È+×ÒÇʵÇÞv‚7ÇÙ‡Ì ÓÈÐÈ+×ÒÇ!Ê Ðє…¥×ÒÊ ØÓ ×ÒԁÈ+×ÒÎ‚Ô‚Û ã¤Û æ ÐɂÑÒɂÙKÐÊԂáNF£ÛäÐÊ@ÖÉ*ʜŠ7ÇԄÎßÕ`áAÐÊ#֎®ÛN¯›Ðwä}Ë+É*ÑjÛWMLNLLÛ ÇË+Ø!ÉßȄÈ+×ÊØ\ÉpŒ΂ɂÌÈ+×ÒÇ!Ê Ô7×ÔÕ ÐË+Ù\ޕÓÑo×ÒÊ9ÑÐÊØ!Ó#ÐØ!ɰÑÒÉwÐË+ÊÝ ×ÒÊØÛªÜFÊ«“«q$VXe,Zf\Wˆ}1WUqNRX\(Z]\^N°„_ zlW&VpZsqt1Z]_&_&|rW*T[\\wq[cs|Rbqt t±q[\,^N|lqM^Wt²W&q[Rn\lZ]\,^á¥äAÇ!ÑÒÓهɘ,!á¥Ì#ÐØɂÔýkX€¤Û£È+Ç9ÐÌÝ ÌoɂÐËwÛ æ Û æ ЂÍ^á*®ێF®ÚoÉßËÖɂɂÊ`ፅۈ³Š×Ë+ԄÎÕ ÙKÐÊ`á*´@ÛJ°Çè‚×ÒÉ*Ë+Ç!àoá §ºÛ´SÇ!Ú×ÊԄÇ!Ê`áÐÊ#֛˜>Û}Æ®×ÑÐ×Ê`ÛMLNL$}ۙ˜9שŒÉwÖÝY×ÒÊ ×È+×ÐÈ+×ÒäAÉ ÖɂäAɂÑÒÇ!ÌهɂÊ^ȵÇއÑÐÊ ØÓ#ÐØ!ɗÌË+Ç^Î*ɂԄԄ×ÒÊ Ø‘ÔÍԁÈ+ɂهԂÛ5ÜÊ µ Z aXcfe`uvTN\aMWnR&Wn\rVXW5TN\*‘Szzwt Z:WUY{­ŽqNcs|,Rbqt,} qN\^N|lqM^WSPSRUTo VXWn_&_&Z]\,^á7Ì#ÐØɂԍ[¤$Oi€$ N }ۙF®Ô„Ô„Ç^΂×ÐÈ+×ÒÇʦޕDz‚7Ç!هÌÓÈÐÝ È+×ÒÇ!Ê ÐÑ …¥×ÒÊ ØÓ ×ÒԁÈ+×Ò΂ԂᘵÐË+ÎßÕ`Û E Û – Ê Ø!É*ÑԄÇÊ-ÐÊ#ÖÜßÛ æ ÐØ!ÐÊ`ÛøLL~}Û3˜9×ÒÊ ×Òه×Òè‚×ÒÊ Ø>ÙKÐÊÝ Ó ÐÑ#ÐÊ ÊÇÈÐÈ+×ÒÇÊ\΂Ç!ԁȞ×ÒÊ\Ô„Ó ÌoÉßË+ä×ÒԄÉw֣ȄËÐ×ÒÊ ×ÒÊ Ø®ÞéË+Ç!٠΂ÇË„Ý ÌoÇËÐÛSÜÊ?PSRUTMVXW&WUYNZ]\^N_Tba5‘u”}gNŸ$ŸªÛFSԄԄÇ^΂×ÐÈ+×ÒÇ!Ê;ޕÇË ‚7ÇÙ‡Ì ÓÈÐÈ+×ÒÇ!Ê#ÐÑ …`×ÒÊ Ø!Ó×ÒԁÈ+×Î*Ô‚Û HêÛNË+É*Ó Ê#֛áv³ÏÛ E Û E ɂÓÊ Øá – Û E Õ#ÐهאËwáTÐÊ#Ök†ÏÛv¡T×ÒԄÕ}Ú^Í^Û LL$^Û E ɂÑÒɂÎ*È+×ÒäAɇÔ+ÐÙ‡Ì ÑÒ×ÒÊ ØµÓԄ×ÒÊ Ø8È+Õɍ¶^Ó É*˄ÍgÚ^Í;΂ÇÙ\Ý Ù‡×È„È+ɂɰÐÑÒØ!ÇË+אÈ+Õٵۄ“·q$VXe,Z]\wW5}1WUqNRX\(Z]\^á‹[OšoMNi€~O}Û ®ۙ®!ÓÈ+ɂԄÇʑÐÊ Ö E ۙJ@ÐÈ+è!ÛMLNL$ }Û¡lÉ*ÎßÕÊ ×ÒÎwÐÑÈ+É*Ë+ه×ÒÊ ÇÑÝ ÇØÍš E Ç!هÉGÑ×ÒÊØ!Ó ×ÒԁÈ+×ÒÎ(ÌË+ÇÌoÉ*˄È+×ÒɂÔKÐÊ Ö¦ÐÊ ÐÑÒØ!ÇË+אÈ+ÕÙ ÞéÇË\×ÖɂÊ^È+×±¨ ÎwÐÈ+×ÒÇʗ×Ê/È+ÉnŒÈwÛ·­ŽqNcs|,RbqtK} qN\^N|lqM^W¸„\^NZso \wWXWnRXZ]\^á5š LM€‹^Û æ Ûw…`É*â×ÒÔªÐÊ#Ö`®Û‚TÐÈ+ÑÒÉ*ȄÈwÛELLN¤Û¹³ŠÉ*È+ÉßË+Ç!Ø!É*Ê É‚ÇÓ ÔŠÓÊ Î‚ÉßË„Ý ÈÐ×Ê^ÈFÍ@Ô+ÐÙ‡Ì ÑÒ×ÒÊ ØSޕÇ˞ԄÓÌoÉ*Ë+ä}×ÒԄÉwÖ£ÑÒÉwÐË+Ê ×ÒÊ ØÛÜʜP„RbTMVXWXWUYo Z]\^N_dTbadcfe(W?g$g[cfejX\(cmWnRX\wqNcsZ T[\wqtˆuvTN\aMWnR&Wn\rVnW?TN\x“«qo VXe,Zf\W5} 
W&q[Rn\lZ]\,^Û æ Ûw…`É*â×ÒԊÐÊ#Ö9ã¤Ûr‡ªÐÑÒÉ!ÛOMLL[¤ۉF3Ԅɶ}ÓɂÊ^È+×ÐѺÐÑÒØ!ÇË+אÈ+ÕÙ ÞéÇË$ȄËÐ×ÒÊ ×ÒÊ Ø®È+ÉpŒ}È΂ÑÐԄԄש¨#É*Ë+ԂÛ$ÜFʍPSRUTMVXW&W&Y[Z]\,^N_{Tba™‘u “·o ¬,j[ºGjU»¼gŸŸ½ÛwF‰‚„˜8Ý E Üb‡ŠÜ/´@Û ´£Ûl…`×ÉßË+ɰÐÊ Öd§ºÛl¡ºÐ!ÖɂÌ#ÐÑÒÑ׀ÛyMLNL$}ÛGFSÎ*È+×ÒäAɊÑÒÉwÐË+Ê ×ÒÊ ØêâאÈ+Õ Î‚ÇههאȄÈ+ɂɂԮÞéÇ˰È+ÉpŒ}ȊÎwÐÈ+ɂØÇË+×ÒèwÐÈ+×ÇÊ¥ÛÏÜÊ«PSRUTMVXW&W&Y[Z]\,^N_ Tba‰cfe(W µ TN|,RncmW&Wn\(cfe›­ˆqNcsZ TN\rqt=uvTN\MaMWnRUWn\wVXWŽTN\d‘‰RXcsZ ¾™VpZ qNt jX\lcmWpt]t Z²^Wn\wVXWßá Ì#ÐØɂÔ{ NLn€ [L~}Û¦F5F5FSÜßÛ ˜>ے˜µÐË+Î‚Ó Ô‚áx˜>ے˜µÐË+΂×ÒÊ à}×ÒÉ*â×Ò΂è!áƒÐÊ ÖŠ®Û E ÐÊ}È+ÇË+×ÒÊ ×€Û LL}Û¿Š7Ó×ÒÑÖ×ÒÊ Ø-ÐÑÐË+ØÉÐÊ ÊÇÈÐÈ+ÉwÖqÎ*ÇË+Ì ÓÔ(ÇÞ – ÊÝ ØÑ×ÒԄզšK¡TÕ É‰§É‚Ê Ê¡`Ë+ɂɂÚ#ÐÊ àoÛ5uvT[y5zr|,c:qNcsZ T[\wqt}=Z]\,^N|,Z]_Xo csZ Vp_+áMLl£:‹N¥n𠁰MM€,NÀÛ ˜>Û ˜9Ó Ê Çè!á¥ÆÏÛ¦§Ó Ê^Í}ÐàÐÊÇ!àoá æ Û´SÇÈ+Õ`álÐÊ#Ö æ ۔¯o×ÙKÐàoÛ LLL}Û`FBÑɂÐË+Ê ×ÒÊØ(ÐÌÌË+ÇAÐÎßÕ>È+ǵԄÕ#ÐÑÑÒÇwâHÌ ÐË+Ԅ×ÒÊ ØÛGÜÊ PSRUTMVXW&WUYNZ]\,^N_›T/aˆ¸™“—­‰}”P¹oiÁQÂr}¹uŽÃ Ÿ$Ÿ۔F®Ô„Ô„Ç^΂×ÐÈ+×ÇʼޕÇË ‚7ÇÙ‡Ì ÓÈÐÈ+×ÒÇ!Ê#ÐÑ …`×ÒÊ Ø!Ó×ÒԁÈ+×Î*Ô‚Û …$Û{´ŠÐهԄÕ#Ђâ1ÐÊ ÖĘ>ێ˜µÐË+Î‚Ó Ô‚Û LLL}Û¢¡lÉnŒÈ;ÎÕ}Ó Êà}Ý ×ÒÊ Ø9Ó Ô„×ÒÊ ØGȄËÐÊ ÔÞéÇË+ÙKÐÈ+×ÇÊÝYÚ#ÐԄÉwÖ¼ÑÒÉwÐË+Ê ×ÒÊ ØÛKÜÊÅ­ŽqNcs|o Rbqt,} qN\^N|lqM^W¹P„RbTMVnWn_&_&Z]\^«Æ”_&Z]\^«ÂrWnRXÇ5} qNRs^WuvTNRmzT[RUqÛ J°ÑÒÓ^âTÉ*ËwÛ ³ÏÛ E Û E É‚Ó ÊØáG˜>ÛvƒŠÌÌoÉ*Ëwá$ÐÊ ÖųÏÛ E Ç!هÌoÇÑ×ÒÊԄà}Í^Û¥MLL‹}Û È ÓÉ*˄ÍÚ^͗΂Ç!ههאȄÈ+ɂÉÛ¦ÜFÊÉPSRUTMVXW&WUYNZ]\^N_?Tba—cfe(W µ Z ancfe ‘‰\l\(|lqtK‘u “ÊÁ—TNR&Ë[_belTXzÌTN\8uvTNy{zr|,c:qNcsZ TN\rqtK}1WUqNRX\ro Z]\^·Í”e(WUTNRXÇá#Ì#ÐØ!ɂÔ{‹[O$M€,‹NL[¤Û¦F‰‚„˜>Û ¡TÕ ÉˆÎ5¡1F‰‡´®É‚Ô„É‚ÐË+ÎÕ·‡®Ë+Ç!Ó Ì`ÛeMLLNOۙF3ÑÒÉpŒ×ÒÎwÐÑÒ×Òè‚ÉwÖ9ȄË+É‚É ÐÖ[ρÇ×ÒÊ ×ÒÊ Ø/ØËÐهÙKÐˇޕÇËGɂÊØ!ÑÒ×ÒԄեÛ8¡¥É‚ÎßÕÊ ×ÒÎwÐÑ{´SɂÌoÇË„È Üm´‰‚ E ¡lɂÎÕ*´SɂÌoÇ˄ÈvLNOÝJMO}áЊÊ×ÒäAÉ*Ë+ԄאÈÍ£ÇÞ¦§ºÉ*Ê Ê ÔÍÑÒäÐÊÝ Ê×Ð}Û – Ûv¡ ÏÇ!ÊØJ°×ÒÙ E ÐÊ Ø>ÐÊ Öx®ÛƞÉ*É‚Ê ÔÈ„ËÐ}Û²MLNLLÛI´SɂÌË+ÉßÝ Ô„É‚Ê^È+×ÒÊ ØgÈ+ÉpŒ}ȇÎÕÓÊ à}ԂۦÜÊÑP„RbTMVnW&WUYNZ]\^N_?Tba•¸G‘u”}Ã ŸŸÛ F‰‚S…$Û ®ۛÆ$É‚É‚Ê ÔÈ„ËÐ}Û MLLNOÛE#ÐԁÈ5†§ ÎÕ}Ó Ê à}×ÒÊ Ø‡ÓԄ×ÒÊ ØKهɂهÇ˄Í}Ý Ú ÐԄÉwÖqÑɂÐË+Ê ×ÒÊئÈ+ɂÎÕ Ê×Ò¶^ÓɂԂÛCÜFÊ ÓG¸™­¹¸G}”¸™‘‰»¹­ˆomŸÔ,Õ PSRUTMVXW&WUYNZ]\,^N_„TbaGcfe(Wv¸„Z±^ie,cfeˆÓ5Wpt ^NZ 
qN\(osÖ|,c:VXe—uvT[\MaMWnRUWn\wVXW TN\ד·q$VXe,Z]\wWI} W&q[Rn\lZ]\,^áµã/ÐØÉ‚Ê ×ÒÊ Øɂʥá8È+Õ ÉɆ®É*È+Õ ÉßË„Ý ÑÐÊ ÖÔ‚Û F£Û›Æ$Ç!ÓÈ+×ÑÐ×Êɂʥ۠LL}ۛ­¹PŽÍrTMTt á¥Ð9ÖÉßÈ+ɂÎ*È+Ç˪ÇÞ – Ê ØÑÒ×Ô„Õ ÊÇ!Ó Ê¼Ì ÕËÐԄɂԂÛ8ÜÊkPSRUTMVXW&WUYNZ]\^N_—T/acfelWxÁ—TNR&Ë[_belTXzÌTN\ Â(WnRXÇ?} qNRs^WuvTNRmzT[RUqá2Ì ÐØɂÔ*¤$OM€ N}ۙF®Ô„Ô„Ç^΂×ÐÈ+×ÒÇʦޕÇË ‚7ÇÙ‡Ì ÓÈÐÈ+×ÒÇ!Ê#ÐÑ …`×ÒÊ Ø!Ó×ÒԁÈ+×Î*ԂÛ
Using existing systems to supplement small amounts of annotated grammatical relations training data∗

Alexander Yeh
Mitre Corp.
202 Burlington Rd.
Bedford, MA 01730 USA
[email protected]

Abstract

Grammatical relationships (GRs) form an important level of natural language processing, but different sets of GRs are useful for different purposes. Therefore, one may often only have time to obtain a small training corpus with the desired GR annotations. To boost the performance from using such a small training corpus on a transformation rule learner, we use existing systems that find related types of annotations.

1 Introduction

Grammatical relationships (GRs), which include arguments (e.g., subject and object) and modifiers, form an important level of natural language processing. Examples of GRs in the sentence "Today, my dog pushed the ball on the floor." are pushed having the subject my dog, the object the ball and the time modifier Today, and the ball having the location modifier on (the floor). The resulting annotation is

my dog −subj→ pushed
on −mod-loc→ the ball
etc.

GRs are the objects of study in relational grammar (Perlmutter, 1983). In the SPARKLE project (Carroll et al., 1997), GRs form the top layer of a three layer syntax scheme.

∗ This paper reports on work performed at the MITRE Corporation under the support of the MITRE Sponsored Research Program. Marc Vilain provided the motivation to find GRs. Warren Greiff suggested using randomization-type techniques to determine statistical significance. Sabine Buchholz and John Carroll ran their GR finding systems over our data for the experiments. Jun Wu provided some helpful explanations. Christine Doran and John Henderson provided helpful editing. Three anonymous reviewers provided helpful suggestions.
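For concreteness, annotations like the ones above can be stored as label–dependent–head triples; the sketch below is an illustrative representation only (the tuple layout and the "mod-time" label are assumptions, not the paper's actual data format):

```python
# Each GR as a (label, dependent, head) triple; the set encodes the
# annotations for "Today, my dog pushed the ball on the floor."
# (illustrative representation, not the paper's actual format)
grs = {
    ("subj", "my dog", "pushed"),
    ("obj", "the ball", "pushed"),
    ("mod-time", "Today", "pushed"),      # "mod-time" label is assumed
    ("mod-loc", "on (the floor)", "the ball"),
}

# e.g., look up the subject of "pushed"
subjects = [dep for (label, dep, head) in grs
            if label == "subj" and head == "pushed"]
print(subjects)  # ['my dog']
```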
Many systems (e.g., the KERNEL system (Palmer et al., 1993)) use GRs as an intermediate form when determining the semantics of syntactically parsed text. GRs are often stored in structures similar to the F-structures of lexical-functional grammar (Kaplan, 1994).

A complication is that different sets of GRs are useful for different purposes. For example, Ferro et al. (1999) is interested in semantic interpretation, and needs to differentiate between time, location and other modifiers. The SPARKLE project (Carroll et al., 1997), on the other hand, does not differentiate between these types of modifiers. As has been mentioned by John Carroll (personal communication), combining modifier types together is fine for information retrieval. Also, having less differentiation of the modifiers can make it easier to find them (Ferro et al., 1999).

Furthermore, unless the desired set of GRs matches the set already annotated in some large training corpus,1 one will have to either manually write rules to find the GRs, as done in Aït-Mokhtar and Chanod (1997), or annotate a new training corpus for the desired set. Manually writing rules is expensive, as is annotating a large corpus. Often, one may only have the resources to produce a small annotated training set, and many of the less common features of the set's domain may not appear at all in that set. In contrast are existing systems that perform well (probably due to a large annotated training set or a set of carefully hand-crafted rules) on related (but different) annotation standards. Such systems will cover many more domain features, but because the annotation standards are slightly different, some of those features will be annotated in a different way than in the small training and test set.

1 One example is a memory-based GR finder (Buchholz et al., 1999) that uses the GRs annotated in the Penn Treebank (Marcus et al., 1993).
A way to try to combine the different advantages of these small training data sets and existing systems which produce related annotations is to use a sequence of two systems. We first use an existing annotation system which can handle many of the less common features, i.e., those which do not appear in the small training set. We then train a second system with that same small training set to take the output of the first system and correct for the differences in annotations.

This approach was used by Palmer (1997) for word segmentation. Hwa (1999) describes a somewhat similar approach for finding parse brackets which combines a fully annotated related training data set and a large but incompletely annotated final training data set. Both these works deal with just one (word boundary) or two (start and end parse bracket) annotation label types, and the same label types are used in both the existing annotation system/training set and the final (small) training set. In comparison, our work handles many annotation label types, and the translation from the types used in the existing annotation system to the types in the small training set tends to be both more complicated and most easily determined by empirical means. Also, the type of baseline score being improved upon is different. Our work adds an existing system to improve the rules learned, while Palmer (1997) adds rules to improve an existing system's performance.

We use this related system/small training set combination to improve the performance of the transformation-based error-driven learner described in Ferro et al. (1999). So far, this learner has started with a blank initial labeling of the GRs. This paper describes experiments where we replace this blank initial labeling with the output from an existing GR finder that is good at a somewhat different set of GR annotations.
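The two-stage combination just described can be sketched as follows; this is a minimal illustration under assumed interfaces (the finder, the translation table, and all names here are stand-ins for the real systems, not their actual APIs):

```python
# Stage 1: an existing GR finder produces related-but-different
# annotations, and a learned label translation maps them into the
# target annotation scheme.  Stage 2 (not shown): transformation rules
# trained on the small corpus correct the remaining differences.
# (All names here are illustrative stand-ins for the real systems.)

def initial_annotations(existing_finder, translation, text):
    """Translate an existing system's output into the target GR scheme."""
    translated = set()
    for (label, dep, head) in existing_finder(text):
        target = translation.get(label)   # None means "no relation"
        if target is not None:
            translated.add((target, dep, head))
    return translated

def toy_finder(text):
    # pretend output of an existing GR finder
    return {("np-sbj", "my dog", "pushed")}

translation = {"np-sbj": "subj"}          # mapping learned from the small corpus
print(initial_annotations(toy_finder, translation, "..."))
# {('subj', 'my dog', 'pushed')}
```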
With each of the two existing GR finders that we use, we obtained improved results, with the improvement being more noticeable when the training set is smaller. We also find that the existing GR finders are quite uneven on how they improve the results. They each tend to concentrate on improving the recovery of a few kinds of relations, leaving most of the other kinds alone. We use this tendency to further boost the learner's performance by using a merger of these existing GR finders' output as the initial labeling.

2 The Experiment

We now improve the performance of the Ferro et al. (1999) transformation rule learner on a small annotated training set by using an existing system to provide initial GR annotations. This experiment is repeated on two different existing systems, which are reported in Buchholz et al. (1999) and Carroll et al. (1999), respectively. Both of these systems find a somewhat different set of GR annotations than the one learned by the Ferro et al. (1999) system. For example, the Buchholz et al. (1999) system ignores verb complements of verbs and is designed to look for relationships to verbs and not GRs that exist between nouns, etc. This system also handles relative clauses differently. For example, in "Miller, who organized ...", this system is trained to indicate that who is the subject of organized, while the Ferro et al. (1999) system is trained to indicate that Miller is the subject of organized. As for the Carroll et al. (1999) system, among other things, it does not distinguish between subtypes of modifiers such as time, location and possessive. Also, both systems handle copulas (usually using the verb to be) differently than in Ferro et al. (1999).

2.1 Experiment Set-Up

As described in Ferro et al. (1999), the transformation rule learner starts with a p-o-s tagged corpus that has been chunked into noun chunks, etc.
The starting state also includes imperfect estimates of pp-attachments and a blank set of initial GR annotations. In these experiments, this blank initial set is changed to be a translated version of the annotations produced by an existing system. This is how the existing system transmits what it found to the rule learner. The set-up for this experiment is shown in figure 1. The four components with + signs are taken out when one wants the transformation rule learner to start with a blank set of initial GR annotations. The two arcs in that figure with a * indicate where the translations occur.

These translations of the annotations produced by the existing system are basically just an attempt to map each type of annotation that it produces to the most likely type of corresponding annotation used in the Ferro et al. (1999) system. For example, in our experiments, the Buchholz et al. (1999) system uses the annotation np-sbj to indicate a subject, while the Ferro et al. (1999) system uses the annotation subj. We create the mapping by examining the training set to be given to the Ferro et al. (1999) system. For each type of relation ei output by the existing system when given the training set text, we look at what relation types (which tk's) co-occur with ei in the training set. We look at the tk's with the highest number of co-occurrences with that ei. If that tk is unique (no ties for the highest number of co-occurrences) and translating ei to that tk generates at least as many correct annotations in the training set as false alarms, then make that translation. Otherwise, translate ei to no relation. This latter translation is not uncommon. For example, in one run of our experiments, 9% of the relation instances in the training set were so translated; in another run, 46% of the instances were so translated. Some relations in the Carroll et al.
(1999) system are between three or four elements. These relations are each first translated into a set of two-element sub-relations before the examination process above is performed.

Even before applying the rules, the translations find many of the desired annotations. However, the rules can considerably improve what is found. For example, in two of our early experiments, the translations by themselves produced F-scores (explained below) of about 40% to 50%. After the learned rules were applied, those F-scores increased to about 70%.

An alternative to performing translations is to use the untranslated initial annotations as an additional type of input to the rule system. This alternative, which we have yet to try, has the advantage of fitting into the transformation-based error-driven paradigm (Brill and Resnik, 1994) more cleanly than having a translation stage. However, this additional type of input will also further slow down an already slow rule-learning module.

2.2 Overall Results

For our experiment, we use the same 1151 word (748 GR) test set used in Ferro et al. (1999), but for a training set, we use only a subset of the 3299 word training set used in Ferro et al. (1999). This subset contains 1391 (71%) of the 1963 GR instances in the original training set. The overall results for the test set are

Smaller Training Set, Overall Results
     R            P      F      ER
IaC  478 (63.9%)  77.2%  69.9%  7.7%
IaB  466 (62.3%)  78.1%  69.3%  5.8%
NI   448 (59.9%)  77.1%  67.4%

where row IaB is the result of using the rules learned when the Buchholz et al. (1999) system's translated GR annotations are used as the Initial Annotations, row IaC is the similar result with the Carroll et al. (1999) system, and row NI is the result of using the rules learned when No Initial GR annotations are used (the rule learner as run in Ferro et al. (1999)). R(ecall) is the number (and percentage) of the keys that are recalled.
P(recision) is the number of correctly recalled keys divided by the number of GRs the system claims to exist. F(-score) is the harmonic mean of recall (r) and precision (p) percentages. It equals 2pr/(p + r). ER stands for Error Reduction. It indicates how much adding the initial annotations reduced the missing F-score, where the missing F-score is 100% − F:

ER = 100% × (F_IA − F_NI) / (100% − F_NI),

where F_NI is the F-score for the NI row, and F_IA is the F-score for using the Initial Annotations of interest.

[Figure 1: Set-up to use an existing system to improve performance. The diagram shows the existing system (with the translation steps marked by *) supplying initial GR annotations for both the small training set and the test set; the rule learner uses the small training set and its key GR annotations to produce rules, and the rule interpreter applies those rules to produce the final test GR annotations.]

Here, the differences in recall and F-score between NI and either IaB or IaC (but not between IaB and IaC) are statistically significant. The differences in precision are not.2 In these results, most of the modest F-score gain came from increasing recall.

One may note that the error reductions here are smaller than Palmer (1997)'s error reductions. Besides being for different tasks (word segmentation versus GRs), the reductions are also computed using a different type of baseline. In Palmer (1997), the baseline is how well an existing system performs before the rules are run. In this paper, the baseline is the performance of the rules learned without first using an existing system. If we were to use the same baseline as Palmer (1997), our baseline would be an F of 37.5% for IaB and 52.6% for IaC. This would result in a much higher ER of 51% and 36%, respectively.

2 When comparing differences in this paper, the statistical significance of the higher score being better than the lower score is tested with a one-sided test. Differences deemed statistically significant are significant at the 5% level. Differences deemed non-statistically significant are not significant at the 10% level. For recall, we use a sign test for matched-pairs (Harnett, 1982, Sec. 15.5). For precision and F-score, a matched-pairs randomization test (Cohen, 1995, Sec. 5.3) is used.

We now repeat our experiment with the full 1963 GR instance training set. These results indicate that as a small training set gets larger, the overall results get better and the initial annotations help less in improving the overall results. So the initial annotations are more helpful with smaller training sets. The overall results on the test set are

Full Training Set, Overall Results
     R            P      F      ER
IaC  487 (65.1%)  79.7%  71.7%  6.3%
IaB  486 (65.0%)  76.5%  70.3%  1.7%
NI   476 (63.6%)  77.3%  69.8%

The differences in recall, etc. between IaB and NI are now small enough to be not statistically significant. The differences between IaC and NI are statistically significant,3 but the difference in both the absolute F-score (1.9% versus 2.5% with the smaller training set) and ER (6.3% versus 7.7%) has decreased.

3 The recall difference is semi-significant, being significant at the 10% level.

2.3 Results by Relation

The overall result of using an existing system is a modest increase in F-score. However, this increase is quite unevenly distributed, with a few relation(s) having a large increase, and most relations not having much of a change. Different existing systems seem to have different relations where most of the increase occurs.

As an example, take the results of using the Buchholz et al. (1999) system on the 1391 GR instance training set. Many GRs, like possessive modifier, are not affected by the added initial annotations. Some GRs, like location modifier, do slightly better (as measured by the F-score) with the added initial annotations, but some, like subject, do better without.
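As an aside, the F and ER columns in the overall-results tables above can be reproduced from the R and P percentages with the stated formulas, F = 2pr/(p + r) and ER = 100% × (F_IA − F_NI)/(100% − F_NI); a quick check for the smaller training set:

```python
def f_score(p, r):
    # harmonic mean of the precision and recall percentages
    return 2 * p * r / (p + r)

def error_reduction(f_ia, f_ni):
    # share of the "missing" F-score (100 - F_NI) that was recovered
    return 100 * (f_ia - f_ni) / (100 - f_ni)

# Smaller training set, overall results (R%, P% taken from the table)
f_iac = f_score(77.2, 63.9)
f_iab = f_score(78.1, 62.3)
f_ni = f_score(77.1, 59.9)

print(round(f_iac, 1), round(f_iab, 1), round(f_ni, 1))  # 69.9 69.3 67.4
print(round(error_reduction(f_iac, f_ni), 1))            # 7.7
print(round(error_reduction(f_iab, f_ni), 1))            # 5.8
```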
With GRs like subject, some differences between the initial and desired annotations may be too subtle for the Ferro et al. (1999) system to adjust for. Or those differences may be just due to chance, as the result differences in those GRs are not statistically significant. The GRs with statistically significant result differences are the time and other[4] modifiers, where adding the initial annotations helps. The time modifier[5] results are quite different:

Smaller Training Set, Time Modifiers
        R            P       F       ER
  IaB   29 (64.4%)   80.6%   71.6%   53%
  NI    14 (31.1%)   56.0%   40.0%

[4] Modifiers that do not fall into any of the subtypes used, such as time, location, possessive, etc. Examples of unused subtypes are purpose and modality.
[5] There are 45 instances in the test set key.

The difference in the number recalled (15) for this GR accounts for nearly the entire difference in the overall recall results (18). The recall, precision and F-score differences are all statistically significant. Similarly, when using the Carroll et al. (1999) system on this training set, most GRs are not affected, while others do slightly better. The only GR with a statistically significant result difference is object, where again adding the initial annotations helps:

Smaller Training Set, Object Relations
        R             P       F       ER
  IaC   198 (79.5%)   79.5%   79.5%   17%
  NI    179 (71.9%)   78.9%   75.2%

The difference in the number recalled (19) for this GR again accounts for most of the difference in the overall recall results (30). The recall and F-score differences are statistically significant. The precision difference is not. As one changes from the smaller 1391 GR instance training set to the larger 1963 GR instance training set, these F-score improvements become smaller. When using the Buchholz et al. (1999) system, the improvement in the other modifier is now no longer statistically significant.
However, the time modifier F-score improvement stays statistically significant:

Full Training Set, Time Modifiers
        R            P       F       ER
  IaB   29 (64.4%)   74.4%   69.0%   46%
  NI    15 (33.3%)   57.7%   42.3%

When using the Carroll et al. (1999) system, the object F-score improvement stays statistically significant:

Full Training Set, Object Relations
        R             P       F       ER
  IaC   194 (77.9%)   85.1%   81.3%   16%
  NI    188 (75.5%)   80.3%   77.8%

2.4 Combining Sets of Initial Annotations

So the initial annotations from different existing systems tend to each concentrate on improving the performance of different GR types. From this observation, one may wonder about combining the annotations from these different systems in order to increase the performance on all the GR types affected by those different existing systems. Various works (van Halteren et al., 1998; Henderson and Brill, 1999; Wilkes and Stevenson, 1998) on combining different systems exist. These works use one or both of two types of schemes. One is to have the different systems simply vote. However, this does not really make use of the fact that different systems are better at handling different GR types. The other approach uses a combiner that takes the systems' output as input and may perform such actions as determining which system to use under which circumstance. Unfortunately, this approach needs extra training data to train such a combiner. Such data may be more useful when used instead as additional training data for the individual methods that one is considering to combine, especially when the systems being combined were originally given a small amount of training data. To avoid the disadvantages of these existing schemes, we came up with a third method. We combine the existing related systems by taking a union of their translated annotations as the new initial GR annotation for our system.
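The union scheme needs no extra combiner training data: each existing system's output is translated into the rule learner's GR annotation scheme, and the translated sets are simply merged. A minimal sketch, under the assumption (ours, for concreteness; the paper does not fix a representation) that each translated annotation is a (head, dependent, relation) triple:

```python
def union_initial_annotations(system_outputs):
    """Merge translated GR annotations from several existing systems into
    one initial annotation set for the transformation rule learner.
    The union keeps every GR proposed by any of the systems."""
    merged = set()
    for annotations in system_outputs:
        merged |= set(annotations)
    return merged

# Hypothetical translated outputs for one sentence: one system finds the
# object relation, the other also finds a time modifier.
buchholz_style = {("bought", "sofa", "obj")}
carroll_style = {("bought", "sofa", "obj"), ("bought", "yesterday", "mod-time")}
initial = union_initial_annotations([buchholz_style, carroll_style])
print(sorted(initial))
```

In the experiments, such a merged set replaces the single-system initial annotations (rows IaB and IaC) and is reported as row IaU.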
We rerun rule learning on the smaller (1391 GR instance) training set with a Union of the Buchholz et al. (1999) and Carroll et al. (1999) systems' translated GR annotations. The overall results for the test set are (shown in row IaU)

Smaller Training Set, Overall Results
        R             P       F       ER
  IaU   496 (66.3%)   76.4%   71.0%   11%
  IaC   478 (63.9%)   77.2%   69.9%   7.7%
  IaB   466 (62.3%)   78.1%   69.3%   5.8%
  NI    448 (59.9%)   77.1%   67.4%

where the other rows are as shown in Section 2.2. Compared to the F-score with using Carroll et al. (1999) (IaC), the IaU F-score is borderline statistically significantly better (11% significance level). The IaU F-score is statistically significantly better than the F-scores with either using Buchholz et al. (1999) (IaB) or not using any initial annotations (NI). As expected, most (42 of 48) of the overall increase in recall going from NI to IaU comes from increasing the recall of the object, time modifier and other modifier relations, the relations that IaC and IaB concentrate on. The ER for object is 11% and for time modifier is 56%. When this combining approach is repeated on the full 1963 GR instance training set, the overall results for the test set are

Full Training Set, Overall Results
        R             P       F       ER
  IaU   502 (67.1%)   77.7%   72.0%   7.3%
  IaC   487 (65.1%)   79.7%   71.7%   6.3%
  IaB   486 (65.0%)   76.5%   70.3%   1.7%
  NI    476 (63.6%)   77.3%   69.8%

Compared to the smaller training set results, the difference between IaU and IaC here is smaller for both the absolute F-score (0.3% versus 1.1%) and ER (1.0% versus 3.3%). In fact, the F-score difference is small enough to not be statistically significant. Given the previous results for IaC and IaB as a small training set gets larger, this is not surprising.

3 Discussion

GRs are important, but different sets of GRs are useful for different purposes and different systems are better at finding certain types of GRs.
Here, we have been looking at ways of improving automatic GR finders when one has only a small amount of data with the desired GR annotations. In this paper, we improve the performance of the Ferro et al. (1999) GR transformation rule learner by using existing systems to find related sets of GRs. The output of these systems is used to supply initial sets of annotations for the rule learner. We achieve modest gains with the existing systems tried. When one examines the results, one notices that the gains tend to be uneven, with a few GR types having large gains, and the rest not being affected much. The different systems concentrate on improving different GR types. We leverage this tendency to make a further modest improvement in the overall results by providing the rule learner with the merged output of these existing systems. We have yet to try other ways of combining the output of existing systems that do not require extra training data. One possibility is the example-based combiner in Brill and Wu (1998, Sec. 3.2).[6] Furthermore, finding additional existing systems to add to the combination may further improve the results.

[6] Based on the paper, we were unsure if extra training data is needed for this combiner. One of the authors, Wu, has told us that extra data is not needed.

References

S. Aït-Mokhtar and J.-P. Chanod. 1997. Subject and object dependency extraction using finite-state transducers. In Proc. ACL workshop on automatic information extraction and building of lexical semantic resources for NLP applications, Madrid.

E. Brill and P. Resnik. 1994. A rule-based approach to prepositional phrase attachment disambiguation. In 15th International Conf. on Computational Linguistics (COLING).

E. Brill and J. Wu. 1998. Classifier combination for improved lexical disambiguation. In COLING-ACL'98, pages 191-195, Montréal, Canada.

S. Buchholz, J. Veenstra, and W. Daelemans. 1999. Cascaded grammatical relation assignment. In Joint SIGDAT Conference on Empirical Methods in NLP and Very Large Corpora (EMNLP/VLC'99). cs.CL/9906004.

J. Carroll, T. Briscoe, N. Calzolari, S. Federici, S. Montemagni, V. Pirrelli, G. Grefenstette, A. Sanfilippo, G. Carroll, and M. Rooth. 1997. Sparkle work package 1, specification of phrasal parsing, final report. Available at http://www.ilc.pi.cnr.it/sparkle/sparkle.htm, November.

J. Carroll, G. Minnen, and T. Briscoe. 1999. Corpus annotation for parser evaluation. In EACL-99 workshop on Linguistically Interpreted Corpora (LINC'99). cs.CL/9907013.

P. Cohen. 1995. Empirical Methods for Artificial Intelligence. MIT Press, Cambridge, MA, USA.

L. Ferro, M. Vilain, and A. Yeh. 1999. Learning transformation rules to find grammatical relations. In Computational natural language learning (CoNLL-99), pages 43-52. EACL'99 workshop, cs.CL/9906015.

D. Harnett. 1982. Statistical Methods. Addison-Wesley Publishing Co., Reading, MA, USA, third edition.

J. Henderson and E. Brill. 1999. Exploiting diversity in natural language processing: combining parsers. In Joint SIGDAT Conference on Empirical Methods in NLP and Very Large Corpora (EMNLP/VLC'99).

R. Hwa. 1999. Supervised grammar induction using training data with limited constituent information. In ACL'99. cs.CL/9905001.

R. Kaplan. 1994. The formal architecture of lexical-functional grammar. In M. Dalrymple, R. Kaplan, J. Maxwell III, and A. Zaenen, editors, Formal issues in lexical-functional grammar. Stanford University.

M. Marcus, B. Santorini, and M. Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2).

M. Palmer, R. Passonneau, C. Weir, and T. Finin. 1993. The kernel text understanding system. Artificial Intelligence, 63:17-68.

D. Palmer. 1997. A trainable rule-based algorithm for word segmentation. In Proceedings of ACL/EACL97.

D. Perlmutter. 1983. Studies in Relational Grammar 1. U. Chicago Press.

H. van Halteren, J. Zavrel, and W. Daelemans. 1998. Improving data driven wordclass tagging by system combination. In COLING-ACL'98, pages 491-497, Montréal, Canada.

Y. Wilkes and M. Stevenson. 1998. Word sense disambiguation using optimized combinations of knowledge sources. In COLING-ACL'98, pages 1398-1402, Montréal, Canada.
¦„§ž¬µ2óXž Å « Å”ä ¦„ªž£¶¸ô¦?³„³Qž£­O­£¸=<:PZô¾û Å ™¼ž½²ešQ¤y§Ož ­šQž ± ž£­šQ¤Zµ`­š?¯°­ ²S¡ž¬¯°­Ož¬µ­šQžÌ§ ± ¯œœ¥ž¬§­.²S¡ ›¥­›²£¯œ Ÿ¤y›³|­¯³„µ¹­š„›§.›§n›³„µ„›k²£¯°­Ož¬µ›³¹ž¬¯² š Á„«y¦Q¡ž Å?> @ Ë©);ƒ%"1/;fgC HE5gio;HE 5} ]OE g @ÿ;hg!; ;hH gi J  ; c D_ GA 6G3 3A=d j B D‚Ð Ï V CM* ; J*)M*  M HG I "HEG' h V; 1 KiH5gi;  5j!,2$ 5DCMM h@ÿ; Ï # ¼FE Ï H5GOE;H + > <5 €tE ©' 5; (= J 5 cIH @A 6353 8J Ðeÿ;@A 6G3 3 º d j K %"1/;f0gC; H gix…;;H>  HG 5'   äH5'1 ( ) Summ − Summ −0.018 −0.014 −0.010 −0.006 −0.002 0.0 individual 95 % confidence limits, LSD method response variable: match − + Q ›¥«y¦Q¡žÌ÷–ôŒ›ürž£¡ ž¬³„²Sž ´ž£­¨Çž£ž¬³ ± ž¬¯³©Ÿž£¡ÊV¤¡ ± ¯³„²Sž¬§Ç¨.šQž¬³»žS·Q²£œ¦„µ?›³Q«\²S¤y³„§ ›µQž£¡ ¯°­›¥¤y³©¤Ê­šQž€PZ¦ ±»± ¯Uº ¡ ›¥È¬¯°­›¤y³®šJ¶hŸ¤­šQž¬§›§¾óVºEPQ¦ ±»± ûǯ³„µ®›k³„²£œ¦„µ„›k³Q«b›¥­\óML]PZ¦ ±»± ûKÊX¤¡ ÆÆ*â|Q ( ) Verif − Verif 0.0 0.002 0.004 0.006 0.008 0.010 0.012 0.014 0.016 individual 95 % confidence limits, LSD method response variable: match − + Q ›¥«y¦Q¡ž_kI–ôŒ›ürž£¡ ž¬³„²Sž ´ž£­¨Çž£ž¬³ ± ž¬¯³©Ÿž£¡ÊV¤¡ ± ¯³„²Sž¬§Ç¨.šQž¬³»žS·Q²£œ¦„µ?›³Q«\²S¤y³„§ ›µQž£¡ ¯°­›¥¤y³É¤Ê0­šQžN5ž£¡ ›¥Á?²£¯Uº ­›¥¤y³®š|¶hŸr¤­š„ž¬§›§¾óVºM5ž£¡ ›¥Êeûn¯³„µ®›³„²£œk¦„µ„›³Q«Ì›¥­¾óMLN5ž£¡ ›¥ÊeûmÊV¤¡ Æ Æ â)Q Æ ³J­Ož£¡ ¢U¯œ§`›³Û­šQžÁ?«y¦Q¡ž¬§­š„¯°­`žS·Z²£œ¦?µQž¼È£ž£¡¤ ›³„µ?›²£¯°­Ož §O­¯°­›k§O­›²£¯œœ¥¶Ä§›«y³„›¥Á?²£¯³|­ÉŸž£¡ÊX¤¡ ± ¯³„²Sž µ„›üž£¡ž¬³„²Sž¬§ Åöä š„žœ¯°´ž¬œ§¤y³À­š„žPOÛ¯U·Z›k§®›³„µ„›¥º ²£¯°­OžÄ­šQžð­Ò¨E¤Àœ¥ž£¢ž¬œ§¤¡”žS·hŸž£¡ › ± ž¬³J­¯œbÊÞ¯²S­O¤¡ § ÊX¤¡¨ š„›² š ­šQž ± ž¬¯³”µ„›üž£¡ž¬³„²Sž¬§Œ¯°¡ž$§š„¤—¨.³ ÅŒÆ Ê ­šQž`›k³J­Ož£¡¢°¯œ.›k§$­O¤ê­šQž ¡ ›¥«yš|­$¤Ê7È£ž£¡¤ê­š„ž¬³ò­šQž Á„¡ §­ ± ž ± ´ž£¡n¤Ê#­š„ž¾œ¯°´ž¬œŸ¯›¥¡mŸrž£¡ ÊV¤¡ ± ž¬µ©´ž£­*º ­Ož£¡b¯³?µ›¥ÊE­šQž»›³|­Ož£¡¢U¯œK›§7­O¤®­šQž©œ¥ž£ÊV­\­šQž¬³¼­šQž §Ož¬²S¤y³„µ ± ž ± ´ž£¡¤ÊË­šQž7Ÿ¯›¥¡nŸž£¡ÊV¤¡ ± ž¬µ¹´ž£­O­Ož£¡ Å Q R è <ëçK>@; –yè FuL ””= • çK> –• Q ›¥¡ §O­£¸?¨Çžbž¬§O­¯°´?œ›§ šQž¬µ®¯»´¯§Ož¬œ›³Qž7¨ šQž£¡ž\›µ„ž¬³J­› º Á?²£¯°­›¥¤y³¹›§Ç­šQž¤y³„œ¥¶$«¤y¯œ Åä 
¤¯µ„µQ¡ž¬§§Ç­šQžŒ›µQž¬³Zº ­›¥Á?²£¯°­›¤y³É«¤y¯œÔ¸h¨Ež¦„§ž¬µ»­šQžŒ›³„²S¡ž ± ž¬³J­¯œ§O­O¡ ¯°­*º ž£«¶¤ÊÌôŒ¯œ¥žb¯³„µ®õž¬›­Ož£¡bó*÷¬øøù|û”óTS4U Ó û Å S4U Ó ›³„²S¡ ž ± ž¬³J­¯œkœ¥¶´?¦„›kœµ„§\¯¼µ„ž¬§²S¡ ›¥Ÿ„­›¤y³ê´h¶s²ešQž¬²ª|º ›³Q«½¯³¤¡ µQž£¡ž¬µ”œ›§­¤ÊK¯°­O­O¡ ›¥´¦Q­Ožb­¶|Ÿž¬§Œ¯³„µ”§OžSº œ¥ž¬²S­›³„«ë¯³ ¯°­O­O¡ ›´?¦Q­Ož¤y³„œ¶ ¨.š„ž¬³À›¥­®¡ ¦?œ¥ž¬§¤y¦Q­ ¯³|¶½¡ ž ± ¯›³„›³„«µ?›§O­O¡ ¯²S­O¤¡ § Å L §.µ„›§O­O¡ ¯²S­O¤¡e§.¯°¡ž ¡ ¦„œž¬µ®¤y¦Q­£¸­šQž£¶ ³Q¤½œ¤y³Q«ž£¡7›³F?¦Qž¬³?²Sž¾­šQž$§Ož¬œ¥ž¬²gº ­›¥¤y³©Ÿ„¡¤Z²Sž¬§§ Åä š„ž ›³„›­›¯œ?§Ož£­E¤Ê#µ„›k§O­O¡ ¯²S­O¤¡ §m¯°¡ž ²S¤ ± Ÿ?¦„­Ož¬µÌ¯²£²S¤¡eµ„›³Q«­O¤Œ¨.š„¯°­Ë›§#žS·ZŸž¬²S­Ož¬µ­O¤Œ´rž ›³¹ÊX¤h²£¦?§mÊV¤¡n­šQž¾§Ÿrž¬¯°ªž£¡ ¯³„µ­šQž¾šQž¬¯°¡ ž£¡n´?¯§Ož¬µ ¤y³­šQžb›³J­Ož¬³|­›¥¤y³„¯œ§­O¡ ¦„²S­¦Q¡ž7¤ÊË­šQž¾µ?›¯œ¥¤«y¦Qž Š⠞S·h­£¸Q¨Ež¾²S¡ž¬¯°­Ož¬µ®¯Ÿ¯°¡ ¯ ± ž£­Ož£¡ ›¥È£ž¬µ§ž¬œ¥ž¬²S­›¥¤y³ §O­O¡ ¯°­Ož£«¶$²£¯œœž¬µ©ÍXÎ ÐÒÑ Î Ð ÍÞӏÎհքÍXά×EØ Ñ ÎÙ ÑSÚ óTSVS4U*W û Å SVS4U*W읧˟?¯°¡e¯ ± ž£­Ož£¡ ›¥È£ž¬µ$ÊX¤¡¨.š?›² š$²S¤y³|­OžS·h­§E¯°¡ž ¯œœ¥¤ ¨Çž¬µ\­O¤ ›³?¦Qž¬³„²Sž¯°­O­O¡ ›´?¦Q­OžÇ§ž¬œ¥ž¬²S­›¥¤y³\§¤­š„¯°­ ¨Ež²£¯³”µQž£­Ož£¡ ± ›³QžÌ¨.š„›²eš ²S¤ ± ´?›³?¯°­›¥¤y³„§ ¤ÊK¤y¦Q¡ E ;@ÿK<5 ~giE@ÿ+F G;!@ÿ;Kg MHE  HG  O5"Gj š|¶|Ÿ¤­šQž¬§ž¬§5¡ž¬§¦„œ­G›³­š„žn´rž¬§­ ± ¯°­² š»­O¤¾šh¦ ± ¯³ µQž¬§ ²S¡ ›¥Ÿ„­›¥¤y³?§ Å L ÊX­Ož£¡§Ož¬œž¬²S­›³Q«©¯°­O­O¡e›¥´?¦Q­Ož¬§ ¯§ ›³Qº µ„›k²£¯°­Ož¬µü´h¶­š„žë›³„²£œ¦?µQž¬µÜšJ¶hŸ¤­šQž¬§Ož¬§£¸SVS4U*W ­šQž¬³ë¦„§Ož¬§­šQž/S4U Ó §O­O¡ ¯°­Ož£«¶Ä­O¤µQž£­Ož£¡ ± ›³Qž›¥Ê ¯µ„µ?›¥­›¥¤y³„¯œn¯°­O­O¡ ›´?¦Q­Ož¬§»¯°¡ž ³Qž£ž¬µQž¬µò­O¤ê¡ ¦„œ¥ž¤y¦Q­ ¯³|¶¹¡ž ± ¯›³„›k³Q«Ìµ?›§O­O¡ ¯²S­O¤¡ § Å ä ¤ µQž£­Ož£¡ ± ›³QžXSVS4U*W7ìí§Ÿ?¯°¡ ¯ ± ž£­Ož£¡ð§Ož£­O­›³Q«y§ ÊX¤¡­š„›§Ÿ?¯°Ÿž£¡¬¸¨Çž.¨.›œœZ¯²£²Sž£Ÿ„­Ç­šQžnŸ¤y§›¥­›¥¢ž²S¤¡Oº ¡ž¬œk¯°­›¥¤y³„¯œ7¡ž¬§ ¦„œ¥­§¹ÊX¡¤ ± ­šQžÁ?¡ §O­Ÿ?¯°¡­¤Ê¤y¦Q¡ §O­¦?µQ¶s¯³„µ3¤y³?œ¥¶ê§Oªž£Ÿ„­›k²£¯œœ¥¶ê­Ož¬§­$­šQž®³Qž£«y¯°­›¥¢ž ¤y³Qž¬§ Å ™¼žÀÊX¤y¦„³„µ ­š„¯°­ÛPZ¦ ±É± ¯°¡ ›¥È¬¯°­›¥¤y³ š„¯µ ¯²£œ¥ž¬¯°¡¼Ÿ¤y§›¥­›¢žs›³F¦Qž¬³„²Sžê¨ 
š„›œ¥ž+5ž£¡e›¥Á?²£¯°­›¥¤y³ š„¯µ ¯ò²£œ¥ž¬¯°¡ ³„ž£«y¯°­›¥¢žê¤y³Qž Å Q ¤¡PQ¦ ±»± ¯°¡ ›È¬¯Uº ­›¥¤y³­š„ž£¡ž›§¯Ä§›¥«y³?›¥Á?²£¯³|­½µ„›Üürž£¡ž¬³?²Sž›³2Ÿrž£¡º ÊX¤¡ ± ¯³„²Sž`óTY[ZÉkù]\ ®h÷_^T`bac\ñOñOñOñOñOñ‘|ûb¯³„µ”­šQž Ÿž£¡ÊX¤¡ ± ¯³„²Sž²S¤ ± Ÿ?¯°¡ ›§O¤y³ §šQ¤—¨ ³À›³ Q ›¥«y¦Q¡žò÷ ›³?µ„›²£¯°­Ož¬§E­š„¯°­›¥­m›§E´ž£­O­Ož£¡n­O¤»›³„²£œ¦„µ„ž.­šQž7§¦ ± º ± ¯°¡e›¥È¬¯°­›¥¤y³®š|¶|Ÿ¤­šQž¬§ ›§ ÅQ ¤¡d5ž£¡ ›¥Á?²£¯°­›¤y³®­šQž£¡ž ›§ ¯œ§O¤É¯©§›¥«y³„›Á?²£¯³J­µ?›ürž£¡ ž¬³„²Sž\›³Ÿž£¡ÊX¤¡ ± ¯³?²Sž óTYeZ ÷+N]\ ®h÷_^T`fag\ñOñOñOñ¯kyû»´¦Q­ Q ›¥«y¦Q¡žÕk›³„µ?› º ²£¯°­Ož¬§›¥­.›§´rž£­O­Ož£¡³Q¤­­O¤É²S¤y³?§›µQž£¡h5ž£¡e›¥Á?²£¯°­›¥¤y³ Å L§b¯ ¡ž¬§¦?œ¥­£¸u­š„ž(SVS4U*W!§­O¡ ¯°­Ož£«¶ê¨EžÉ­Ož¬§O­šQž£¡ž ¨.›kœœ#›³?²£œ¦„µQžÌ¯œœG´?¦Q­­šQž5ž£¡ ›¥Á?²£¯°­›¥¤y³”š|¶|Ÿ¤­šQžSº §›k§ Å ä šQžÁ?³„¯œ|§Ož¬œ¥ž¬²S­›¥¤y³Ì§O­O¡ ¯°­Ož£«¶\­š„¯°­u¨EžK¨ ›œœ°­Ož¬§O­ ›§`²£¯œœ¥ž¬µÛ¡ ¯³?µQ¤ ± ›¥È£ž¬µÀ›k³F?¦Qž¬³„²Sž¬§óTijS4U*Wû ÅÆ ­ ›§ ± ¤­›¥¢°¯°­Ož¬µ´h¶Ä¤y¦Q¡¹žS·hŸž¬²S­¯°­›¥¤y³2­š„¯°­›¥Ê7­šQž ´ž¬§O­5²S¤ ± ´?›³„¯°­›¥¤y³»¤Ê?­šQž²S¤ ±»± ¦„³„›k²£¯°­›¥¢ž«¤y¯œ§£¸ ¯§¹›³„µ?›¥¡ž¬²S­œ¥¶3¡ž£Ÿ„¡ ž¬§Ož¬³J­Ož¬µ›³ ¤y¦Q¡¹šJ¶hŸ¤­šQž¬§Ož¬§£¸ ¯°¡ž³Q¤­m›³F?¦Qž¬³|­›¯œQ›³É§Ož¬œ¥ž¬²S­›³Q«Ì¯°­O­O¡ ›¥´?¦„­Ož¬§K­šQž¬³ ­šQž¬§ž½¯µ„µ?›¥­›¥¤y³„¯œ5«¤y¯œk§\¨E¤y¦„œµê´ržÉ­šQž½§ ¯ ± ž©¯§ ( ( ( ) ) ) INC − RINF INC − IINF RINF − IINF −0.16 −0.12 −0.08 −0.04 0.0 0.04 0.08 0.12 simultaneous 95 % confidence limits, Tukey method response variable: match Q ›¥«y¦Q¡ž‚†I–ôŒ›üž£¡ž¬³„²Sž¬§m´ž£­Ò¨Ež£ž¬³ ± ž¬¯³¹Ÿž£¡ÊX¤¡ ± ¯³„²Sž¬§Ç¤Ê#­š„žŒ›k³„²S¡ž ± ž¬³|­¯œ ± ¤hµ„ž¬œËó Æ*â áûe¸Q›³J­Ož¬³|­›¥¤y³„¯œ ›³F¦Qž¬³„²Sž¬§ ± ¤ZµQž¬œó ÆÆ*â|Q ûm¯³?µ¹¡ ¯³„µQ¤ ± ›k³F?¦Qž¬³„²Sž¬§ ± ¤hµQž¬œóÞõ Æ â)Q û  ¶|Ÿ¤­šQž¬§›k§ éuž£¡e²Sž¬³J­¯°«ž»áE¤y³|­O¡ ›¥´?¦Q­›¤y³©­O¤©ô ž¬§ ²S¡ ›¥Ÿ„­›¥¤y³?§ Æ µQž¬³|­›¥Á?²£¯°­›¥¤y³ kø Å †O†¯× ám¤ ±»± ›¥­ ± ž¬³|­ kO‰¯× PZ¦ ±»± ¯°¡e›¥È¬¯°­›¥¤y³ kOk Å ‰¯®O× éGž£¡ § ¦„¯§›¥¤y³ ÷+‰ Å ‰¯®O× ô¤ ± ¯›³`²S¤y³„§O­O¡ ¯›³|­² š?¯³Q«ž¬§ ù Å †O†¯× ä ¯°´œ¥ž»÷–EáE¤y³|­O¡ 
›¥´?¦„­›¥¤y³„§E¤Ê=Œ¤y¯œGáE¤y³|­OžS·h­§.­O¤Éõž¬µ„ž¬§²S¡ ›¥Ÿ„­›¤y³„§ SVS4U*W ± ¯°ªZ›³Q«¡e¯³„µQ¤ ± §ž¬œ¥ž¬²S­›¥¤y³„§\¤ÊE­š„žÉ³Q¤y³Zº ›µQž¬³|­›¥Á?²£¯°­›¤y³„¯œœ¥¶ ³Qž¬²Sž¬§§¯°¡ ¶ ¯°­O­O¡ ›¥´¦Q­Ož¬§ Å ä ¤ ­Ož¬§O­b­š„›§\›kµQž¬¯Z¸u­š„ž(ijS4U*W!§­O¡ ¯°­Ož£«¶¡ ¯³„µ„¤ ± œ¥¶ µQž¬²£›µ„ž¬§$¨.šQž£­šQž£¡$­O¤s§Ož¬œ¥ž¬²S­É¯¼¡e¯³„µQ¤ ± ³h¦ ± ´ž£¡ ¤Ê»¯°­O­O¡ ›¥´¦Q­Ož¬§ Å L § ¨.›¥­škSVS4U*W¾¸›¥­®­šQž¬³¦„§Ož¬§ S4U Ó ­O¤bµQž£­Ož£¡ ± ›³Qž›¥Ê¯µ„µ„›¥­›¤y³„¯œQ¯°­O­O¡ ›¥´?¦„­Ož¬§K¯°¡ž ³Qž£ž¬µQž¬µ®­O¤»¡ ¦„œ¥žŒ¤y¦Q­ µ„›§O­O¡ ¯²S­O¤¡e§ Å ™¼žÛÊX¤y¦„³„µ §›¥«y³„›Á?²£¯³J­3µ„›üž£¡ž¬³„²Sž¬§s´ž£­Ò¨Ež£ž¬³ ­šQž»­š„¡ž£ž»§Ož¬œ¥ž¬²S­›¥¤y³ê§O­O¡ ¯°­Ož£«y›¥ž¬§£¸SVS4U*W7¸lijS4U*W7¸ ¯³„µbS4U Ó óTYmZ ‰]\ñyù]^T`fag\ñOñ¯†yû Å L§©§š„¤—¨.³ ´h¶É­šQž]­”á~L ²S¤y³„Á?µQž¬³„²Sž¾›³|­Ož£¡¢°¯œ§m›³ Q ›¥«y¦Q¡ž‚†h¸ ¨EžÊV¤y¦?³„µ­š„¯°­nSVS4U*W ± ¯°­²ešQž¬µÉšh¦ ± ¯³$µ„ž¬§²S¡ ›¥ŸQº ­›¥¤y³„§5§ ›¥«y³„›¥Á?²£¯³|­œ¥¶7´ž£­O­Ož£¡­š„¯³oijS4U*W3¨ šQž£¡ž¬¯§ S4U Ó µ„›µ®³Q¤­ Å SVS4U*W7¸Q¨.š?›œ¥ž7§O­¯°­›§O­›k²£¯œœ¥¶§› ± ›¥º œ¯°¡7­O¤/S4U Ó ¸¯œ§O¤®š„¯µ¼¯¹­O¡ž¬³?µ”­O¤—¨m¯°¡ µ„§7´ž£­O­Ož£¡ ± ¯°­²ešQž¬§.¨ šQž¬³²S¤ ± Ÿ?¯°¡ž¬µ®­O¤S4U Ó Å ä ¯°´?œ¥ž÷ §š„¤—¨.§s­šQžë¡ ž¬œ¯°­›¥¢ž²S¤y³J­O¡ ›´?¦Q­›¥¤y³„§ ¤Êm­šQž»š|¶|Ÿ¤­šQž¬§Ož¬§\›³„²£œ¦„µ„ž¬µ›k³pSqS4UrW¯³„µ¼­šQž ²S¤y³|­O¡ ›¥´?¦Q­›¤y³ ¤ÊE­šQž$›µQž¬³|­›¥Á?²£¯°­›¤y³”«¤y¯œ5¨.›¥­š„›k³ SVS4U*W Åä šQž²S¤y³J­O¡e›¥´?¦Q­›¥¤y³ ± ¯µQž´h¶`­šQž$›µ„ž¬³J­› º Á?²£¯°­›¥¤y³¹«¤y¯œ#›³„²£œ¦„µ„ž¬§K´¤­š½­šQž7²£¯§Ož¬§›³½¨.š?›² š ›µQž¬³|­›¥Á?²£¯°­›¤y³¨E¯§K­šQžn¤y³„œ¥¶ÌŸ„¡ž¬µ„›k²S­Ož¬µ«¤y¯œuóޛ Å ž Å ³Q¤y³Qž ¤Ê0­šQžŒ²S¤y³J­OžS·Z­§n›³„µ?›²£¯°­Ož¬µÉ›³É¤y¦Q¡EÊV¤y¦Q¡mš|¶Jº Ÿ¤­šQž¬§Ož¬§¯°Ÿ?Ÿ?œ›¥ž¬µ¹ÊX¤¡.¯$Ÿ?¯°¡ ­›²£¦„œ¯°¡µQž¬§ ²S¡ ›¥Ÿ„­›¥¤y³rû ¯³„µs­šQž¹²£¯§Ož¬§$›³¨ š„›² šð¯µ„µ„›­›¥¤y³„¯œ¯°­O­O¡ ›´?¦Q­Ož¬§ š„¯µÄ­O¤ê´rž®¯µ?µQž¬µÄ­O¤ž¬³„§ ¦Q¡ž®¦„³„›2—|¦Qž¹›µQž¬³|­›¥Á?¯Uº ´?›œk›¥­Ò¶¼¯°ÊX­Ož£¡Ì­š„ž©›³„›­›¯œ§Ož¬œ¥ž¬²S­›¤y³„§ ± ¯µQž½›k³ê¯²gº ²S¤¡ µ„¯³?²Sžb¨.›­š`¤y¦Q­Œš|¶|Ÿ¤­šQž¬§Ož¬§ Å Lœ¥­šQ¤y¦Q«yš ­šQž ²S¤y³|­O¡ ›¥´?¦Q­›¤y³„§ ± 
¯µQž¾´h¶¹­šQž\›µ„ž¬³J­›¥Á²£¯°­›¥¤y³«¤y¯œ ¯°¡ž$§ ± ¯œœ¥ž£¡­š„¯³¤y³Qž ± ›¥«yšJ­žS·ZŸrž¬²S­£¸­š?›§ŒµQ¤|ž¬§ ³Q¤­ ± ž¬¯³ ­š„¯°­­šQžb›µ„ž¬³J­›¥Á²£¯°­›¥¤y³®«¤y¯œG¨m¯§ ›³Qº ¢°¯œ›µðÊX¤¡»§O¤ ± ž¡ž¬µQž¬§ ²S¡ ›¥Ÿ„­›¥¤y³?§ ÅêÆ ³„§O­Ož¬¯µÄ›¥­$›³Qº µ„›k²£¯°­Ož¬§¾­š„¯°­b­šQžÉ›µQž¬³|­›¥Á?²£¯°­›¤y³”«¤y¯œÇš„¯µ´rž£ž¬³ ¯µ„µ„¡ž¬§§Ož¬µ ¯œ¥¡ž¬¯µQ¶ð´|¶ò§¤ ± ž®¤Ê7­šQž®¤­š„ž£¡½¯°ŸQº Ÿ?œk›²£¯°´?œ¥žŒ«¤y¯œ§¯³„µ®¡ ž"„ž¬²S­§ ¯É­Ò¶hŸž¾¤Ê5Ÿ¤­Ož¬³J­›¯œ ž¬²S¤y³Q¤ ± ¶­š„¯°­.²£¯³`´ž¾¯² š„›¥ž£¢ž¬µ`¨.šQž¬³ ± ¦„œ¥­›Ÿ?œ¥ž «¤y¯œ§ ›³F?¦„ž¬³„²SžŒ¤y³Qž7žS·hŸ?¡ž¬§§›¥¤y³ Å s t F#LǘQ>Òç •°è F#L à ¦„¡»¡ž¬§¦„œ­§»›³„µ„›k²£¯°­Ož­š„¯°­É¨Ež š„¯—¢ž ›µ„ž¬³J­›¥Á?ž¬µ ¯Ä§Ož£­`¤Ê̯µ„µ?›¥­›¥¤y³„¯œ «¤y¯œ§­š„¯°­ ²£¯³À›³F?¦„ž¬³„²Sž ¯°­O­O¡ ›´?¦Q­Ož©§ž¬œ¥ž¬²S­›¥¤y³sÊV¤¡Ì¡ž¬µQž¬§²S¡e›¥Ÿ„­›¥¤y³„§ Å L §b¨Çž žS·ZŸž¬²S­Ož¬µ¸Q¯œœ¥¤—¨ ›³Q« ± ¦„œ¥­›Ÿ?œ¥žn«¤y¯œ§E­O¤$›³F?¦„ž¬³„²Sž ¡ž¬µ„ž¬§²S¡ ›¥Ÿ„­›¤y³„§©µ?›µë¡ ž"„ž¬²S­¯œœ¥¤—¨m¯°´?œ¥ž¸¯œ¥­Ož£¡ ³„¯Uº ­›¥¢ž¹¨m¯¬¶Z§¤Ê ›µ„ž¬³J­›¥ÊX¶Z›³Q«®¤´ZñOž¬²S­§ ÅÆ ³ðŸ?¯°¡­›²£¦Qº œ¯°¡—¸¨Çž¼§¯—¨ ­š„¯°­¹­šQž¼µQž¬§²S¡ ›Ÿ„­›¥¤y³„§É«ž¬³Qž£¡ ¯°­Ož¬µ ¯§Œ¯½¡ž¬§¦„œ­.¤Ê ± ¦„œ¥­›¥Ÿœ¥ž7«¤y¯œ§7¯³„µ ­šQžµQž¬§²S¡ ›¥Ÿ„º ­›¥¤y³?§7«ž¬³Qž£¡ ¯°­Ož¬µs­O¤`§¯°­›§ÊV¶ÉñO¦?§O­7­šQžÉ›µ„ž¬³J­›¥Á²£¯Uº ­›¥¤y³ë«¤y¯œ ± ¯°­² š ž+—|¦„¯œkœ¥¶s¨Ež¬œœn¨ ›¥­šÄ¨.š„¯°­©š|¦Zº ± ¯³?§n«ž¬³Qž£¡ ¯°­Ož Å PZ¤\ÊÞ¯°¡Ç¨Ežš?¯¬¢žŒ¤y³„œ¥¶Ÿ?¯°¡­›k¯œœ¥¶¯µ„µQ¡ž¬§§ž¬µÉ¤y¦Q¡ ¤¡ ›«y›³„¯œ)—|¦Qž¬§­›¥¤y³ ¯°´¤y¦Q­½­šQž”¡ž¬œ¯°­›¤y³„§š„›¥Ÿ ´žSº ­¨Çž£ž¬³ ± ¦„œ¥­›¥Ÿ?œžG«¤y¯œk§u¯³„µb³Q¤ ± ›³?¯œ°žS·hŸ„¡ ž¬§§›¥¤y³„§ Å L ± ¤y³Q«¤­š„ž£¡©­š„›k³Q«y§£¸K¨Ež §­›œœn³Qž£ž¬µë­O¤ê¯§²Sž£¡Oº ­¯›³`­šQž\µQž£«¡ ž£ž¾­O¤É¨.š?›² š­š?›§n¡ž¬œ¯°­›¤y³„§š„›¥Ÿ¯²gº ­¦„¯œkœ¥¶”ž¬²S¤y³Q¤ ± ›¥È£ž¬§­šQž¹§OŸž¬¯°ªž£¡¬ìí§²S¤y³|­O¡ ›¥´?¦Q­›¤y³ ¯³„µ ± ¯°ªž¬§nÊV¤¡ ± ¤¡ž7ž=ürž¬²S­›¢ž¾²S¤ ±»± ¦?³„›²£¯°­›¥¤y³ Å Æ ³»¯µ„µ„›­›¥¤y³¸¨Ež.³Qž£ž¬µ$­O¤\§ž£Ÿ?¯°¡ ¯°­Ož¤y¦Q­K­šQž«¤y¯œ§ ¡ž£Ÿ„¡ ž¬§Ož¬³J­Ož¬µ¹›³É­šQžuSqS4UrWë§Ož¬œ¥ž¬²S­›¤y³©§O­O¡e¯°­Ož£«¶É­O¤ §Ož£ž 
¨.š„›²eš$²£¯§Ož¬§Ç¯°¡ ž ²S¡ ›¥­›k²£¯œZÊV¤¡Çž¬³„§¦Q¡ ›k³Q«7¯³É›³Zº ÊXž£¡ž¬³„²Sž\›§ ± ¯µQž Åv ¤ ¨Çž£¢ž£¡¬¸­O¤»­Ož¬§­.­šQž¬§Ožb«¤y¯œ§ ›³„µ?›¥¢h›kµ„¦„¯œœ¥¶ ¨Çž³„ž£ž¬µ­O¤¾²S¤yœœ¥ž¬²S­ ± ¤¡ž›k³„§O­¯³„²Sž¬§ ¤ÊË¡ž¬µQž¬§²S¡e›¥Ÿ„­›¥¤y³„§ Å ””=xw*=„Hy=?LǘZ= • y{z|~}q€q‚„ƒn…6†?‡~‡]ˆ‰‹Š… ŒqŽq … ’‘“4””–•I”V—™˜”V—‘•›šœ ž Ÿ ”–¡ Ÿ ”]¢ Ÿ š2…6£’€¥¤d¦~§¨›© }qˆNªn«~¨‹¬qˆ‰§2‚­¨‹ŠM®°¯F§±ˆ‚±‚²³£’€¥¤N´ ¦~§±¨›© }ˆq²~ªh… µu… ¶ |‚±€¥«-ƒ?…x·’§±ˆ‰«~«–€¥«x…$Œq¸~… ž¹Ÿ2Ÿ±º •I”V—j“4”¹»’¼½¥¾• »¥•I”V— ˜’¾• » Ÿ ”]¢ Ÿ ¿ ½¥¼ÁÀo ¡ “4‘Äó”]» Ÿ ¼2šÅ¡:“4”¹»4•I”V—…³¯ÁÆx… yDž‰ŠÆ~ˆ‰´ ‚¨‚² ¶ ራ ÈÉz§2©oª?«~¨¬ˆ§±‚¨‹ŠM®$¯Ê‚­® Ë2Æ~zq‹zq}®Ìy{ˆ‰‡ Š…{ª?«~´ ‡~|¦~‹¨›‚Æ~ˆ©Ͱ€«V|–‚Ë‰§¨‡ Š… Îqˆ€¥«Ï£{…³£’€¥§±‹ˆ‰Š­Š2€~…ÌŒqÐ~…ÇÑ{•›š ºrÒ “ º •I”_—(“¥”]»ÌÑ Ÿ ¢±½¥¾Ó Ÿ ¼ÅÔ/•I” Ò “Õš º ÓÅּŕ Ÿ ”–¡ Ÿ »j×h• “¥‘‹½— Ÿ …X¯FÆx… yDžÁŠÆ~ˆ‚­¨›‚² ƒÊ© ¨‹«_¦~|§}qƪ?«¨‹¬qˆ‰§2‚­¨‹ŠM®… Ø?ˆ§¦]ˆ‰§Š{ØÇ… £Ê›€¥§±Ù$€«©oƒÁ©~ڒ€§±©ے… ¶ Ë2Æ€¥ˆ‰ÈɈ‰§…&ŒŽq~… £Êzq«qб§¨¦~| б¨‹«}(бzÏ© ¨‚±ËDzq|~§2‚­ˆq…™ÜF½—”•I¡ •I¾ Ÿož ¢D• Ÿ ”]¢ Ÿ ² ŒÝ~ÞßЏ¥4à Ѝ¥á… ¯l€¥|xâㅹ£ÊzqÆ~ˆ‰«x…nŒV …ʘ’änå]•I¼Å• ¢2“¥‘ÄÀ Ÿ ¡Iœ½»4š ¿ ½4¼&æ{¼ÅÓ ¡ • çÊ¢‰• “4‘¹èŔ¡ Ÿ ‘I‘•8— Ÿ ”]¢ Ÿ …FÍoéMêë¯F§±ˆ‚±‚‰² ·’zq‚­Šzq«x… â=zq¦–ˆ§­Šdy&€¥ˆ€«©ÏƒFÆ_|©-â=ˆ¨8бˆ‰§…jŒV …£Êzq¤u‡~| б€¥´ б¨‹zq«€¥–¨‹«VŠˆ§‡§ˆ‰Š±€4б¨‹zq«‚lzÈ]бÆ~ˆ&ì&§±¨›ËDˆ€«u¤u€¥í ¨‹¤6‚F¨« бÆ~ˆd}qˆ‰«~ˆ§±€¥Š¨z«z¥Èʧˆ‰ÈɈ‰§±§¨«~}6ˆDí ‡~§±ˆ‚±‚¨‹zq«‚‰…ÜF½—”•IÓ ¡ •I¾ Ÿhž ¢‰• Ÿ ”]¢ Ÿ ²ÄŒî ÐïDÞ ÐÝÝÕà~Ð¥ðqÝ~²q†n‡~§­à_Îq|~«~ˆ… ·€¥§±¦€¥§2€yn¨jƒF|~}qˆ‰«~¨z²j¯l€¥¤uˆ‰›€ñò…$Îqz§2©~€¥«Ä²Îz´ Æ€«~«€hyd… Íoz_z§±ˆ²V€¥«©6â=¨›Ë2Æ~¤uz«–©ÌØh…_êÆ~z¤6€q‚­zq«x… ŒŽ…6†?«Ïˆ‰¤u‡~¨§±¨Ë€¥F¨«_¬ˆ‚­Š¨}q€¥Š¨z«(z¥È=ËDz›€¥¦]z§2€4´ б¨‹¬qˆ© ¨›€¥z}|ˆ‚…Pé:«óædÜÄôõÓÅÜÖÁôxèö6÷=øù¥úǐ¼½¢ Ÿ2Ÿ »4Ó •I”_—4š½ ¿ ¡Éœ ŸÏÒ œ_•I¼Å¡ Ô4ÓTš2•8ûq¡ÉœpÜF½¥” ¿DŸ ¼ Ÿ ”¹¢ Ÿ ½ ¿ ¡Éœ Ÿ ævš2Ó š‰½¢D• “¥¡ü• ½¥” ¿ ½¥¼°ÜF½4änå] ¡:“4¡ • ½4”]“¥‘õôl•I”V— •›š2¡ • ¢Dš2²õÍozq« ´ б§ˆ€¥ ²–£’€¥«€q©~€~² †?|}|‚­Š… ·€¥§±¦€¥§2€/y{¨hƒF|~}qˆ‰«~¨z²{¯l€¥¤uˆ‰›€/ñò…=Îqz§2©~€¥«x²&â=¨›Ë2Æ ´ ¤uz«–©„ØÇ…ÁêÆ~zq¤u€q‚­zq«x²Ê€«©ýÎzqÆ€¥««€-yDžÁͰzVzq§ˆq… 
и¸¸…þêÆˆë€}§±ˆ‰ˆ‰¤uˆ«qŠ„‡~§±z_ˉˆ‚±‚‰Þÿ†?« ˆ‰¤u‡~¨§¨‹´ Ë€¥N¨‹«_¬ˆ‚Mб¨‹}V€4б¨‹zq«òz¥ÈÌÆ_|~¤6€¥« ´TÆ_|~¤6€¥«™Ë‰z¤u‡~| бˆ‰§´ ¤uˆ©~¨€¥Šˆ©óËDzq‹›€¥¦]z§2€4б¨‹¬qˆ°© ¨€‹zq}|~ˆ‚‰… Ò ½-æ’åå Ÿ “4¼ •I”ýèŔ¡ Ÿ ¼Å”]“4¡ • ½4”¹“4‘ ½¥Â ¼Å”]“4‘&½ ¿  äu“¥”ÓÅÜF½4änå] ¡ Ÿ ¼ ž ¡ü»¥• Ÿ šÅ… ÎV€‚z«$£{…~Ø{‚­|x…Œð…ÁÀo ‘¡ • å]‘ Ÿ ÜF½4änå~“¥¼Å•›š‰½4”~š Ò œ Ÿ Ó ½¥¼ÅÔ“4”]»6À Ÿ ¡Éœ~½»ÕšÅ…’£ÊÆ€‡~¤6€¥«€¥«©$Øn€‹ ²xzq«© zq«x… ·€¥§±¦€¥§2€PÎqzÆ«‚Mбz«~ˆq… Œq¥á…gâ=ˆ‡–ˆ‰Š¨‹Š¨z« ¨‹«k©~¨‚­´ ˉz|~§2‚­ˆqޙ† © ¨€‹zq}|~ˆq… é:«b·€¥§±¦€§±€+ÎqzÆ«‚Mбz«~ˆq² ˆ© ¨8бz§²+Ñ Ÿ å Ÿ ¡ü•I¡ • ½4”[•I”ÿ×h•›š‰¢2½4 ¼2š Ÿ  èD”¡ Ÿ ¼»¥•›š‰¢D•IÓ å¹‘•I”]“¥¼ÅÔ̐ Ÿ ¼2š å Ÿ ¢‰¡ •I¾ Ÿ š2ú –½4‘Â ä Ÿ ²Ä¬z|~¤uˆ {é­é zÈÁæ{»4¾4“4”¹¢ Ÿ šÇ•I”o×Ǖ›šD¢2½¥Â ¼2š Ÿ ¼½¢ Ÿ š2š Ÿ šÅ²]Ë2Ɩ€¥‡ бˆ‰§ÇŒq… †n¦~‹ˆ‰í … ¯l€¤Nˆ€’ñò…ÕÎzq§±©€¥«v€¥«©ã·€¥§±¦€§±€’yn¨_ƒF|}ˆ‰«¨‹z–…lŒ_… £Êzq«VŠ§±zx€¥«–©¨«~¨‹Š¨›€4б¨‹¬qˆã¨‹«oËDzq‹›€¥¦]z§2€4б¨‹¬qˆ{‡§zq¦~‹ˆ¤ ‚z¬_¨‹«}v©~¨€‹zq}|~ˆ‚‰…³é:«jÜF½¥ä?å¹Â ¡T“¥¡ü• ½¥”]“4‘–À½» Ÿ ‘š ¿ ½4¼ Ào•8û Ÿ »/èŔ–•I¡ü• “¥¡ü•I¾ Ÿ èD”¡ Ÿ ¼±“¢‰¡ü• ½¥”v=“±å Ÿ ¼2š ¿ ¼½¥ä ¡Iœ Ÿ øø$ænæ{æ?è ž å]¼Å•I”_— ž Ô4änå~½4šÅ•I ä ÒŸ ¢2œ_”–• ¢±“¥‘FÑ Ÿ Ó å½4¼Å¡ žž ÓMøÓ~²‡€¥}qˆ‚=Ž~ŒDà_Žá…_êÆ~ˆh†n†{†?é¯F§±ˆ‚±‚‰… ¯l€¤Nˆ€ñò…Îzq§±©€¥«x…Äи¸¸q€~…¹é:«|ˆ‰«ˉˆ‚xzq«h€4ŠŠ§±¨‹¦~|~Šˆ ‚ˆ‰ˆËDŠ¨z«°¨«°§±ˆ© ˆ‚Ë‰§¨‡ б¨‹zq«‚‰Þ=†bËDz§±‡~|‚{‚Mб|© ®q…{é:« ¼½¢ Ÿ2Ÿ »4•I”_—4šd½ ¿ Ül½±— ž ¢‰• !"4² †?|}|‚­Š… ¯l€¤Nˆ€vñò…_Îqz§2©~€¥«x…õÐ¥¸q¸¸¥¦Ä… èŔ–¡ Ÿ ”¡ • ½4”¹“4‘ èŔ$#Ê Ÿ ”]¢ Ÿ š ½¥”Ö&%(' Ÿ ¢‰¡]Ñ Ÿ » Ÿ šD¢‰¼Å• å]¡ • ½4”š?•I”N×Ǖ “4‘‹½±— Ÿ ’˜’¾• » Ÿ ”¹¢ Ÿ ¿ ¼±½4ä[“4”/˜’änå]•I¼Å• ¢2“4‘ ž ¡ü»¥Ô4…°¯FÆx… yDž³ŠÆˆ‚¨‚²lé:«VŠˆ‰‹´ ¨}ˆ‰«VŠ ¶ ®_‚­Šˆ¤6‚°¯F§±z}q§±€¤$²ãªn«~¨‹¬qˆ‰§2‚­¨‹ŠM®PzÈ̯F¨‹Š­Š±‚­´ ¦|~§}qÆx… ñò…£{…ÄͰ€«~«j€«© ¶ … †Ç…ÄêÆ~z¤u‡‚z«Ä…ŒqŽ! 
…Çâ=Æ~ˆ‰Šz§´ ¨›Ë‰€ ¶ Š§±|ËŊ±|~§±ˆjêÆ~ˆz§±®¹Þ*†ÿÛ§±€¤uˆ‰Ú’z§±Ù*ÈÉzq§бÆ~ˆ †n«€¥® ‚­¨›‚xzÈ~êˆDí_б‚…³ê³ˆË2Æ~«¨Ë€¥qâ?ˆ‰‡]z§Š³â ¶ ´:Ž!Õ´±Œ¸~² ª ¶ £*)Õé:«~ÈÉz§±¤u€¥Š¨z« ¶ ËD¨ˆ‰«–ËDˆ‚é:«–‚Mб¨8б| Šˆq… Ͱ€¥ŠÆ ¶ z¥ÈIв4é:«–Ë¥…² ¶ ˆ€¥Š­Šˆ²4ñ-€‚Æ~¨«~}¥Š±z«x² ŒqŽ~… ž Óü‘Â_š + ¿ ½4¼oà ”•8û÷ • » Ÿ ¡T½ ž ¡:“4¡ •›š2¡ • ¢Dš2² ¶ ˆ‰‡ бˆ‰¤d¦–ˆ§… µã€¥ŠÆ~ˆ‰ˆ«/âã… Íˉµvˆ‰z4Ú=«Ä…ÏŒŽV … ҖŸ ûq¡N÷ Ÿ ” Ÿ ¼“¥¡ü• ½¥” óšÅ•I”V— ×Ǖ›šD¢2½¥Â ¼2š Ÿ ž ¡ ¼“4¡ Ÿ —¥• Ÿ š“4”¹»-,Ľ¢‰Â_š ÜF½4”Ó šÅ¡ ¼“4•I”–¡ šý¡:½ ÷ Ÿ ” Ÿ ¼“¥¡ Ÿ öǓ¥¡ü ¼“¥‘dô祔V—“‰— ŸҖŸ û¡ … £’€¤Ç¦~§±¨›© }ˆvª?«~¨¬ˆ§±‚¨‹ŠM®¯F§±ˆ‚±‚‰… Íoˆ}q€«(ÍozV‚­ˆ§v€«©ÎqzƖ€¥«~«€yDž³Íoz_z§±ˆ…6Œqq~…hé:« ´ ¬qˆ‚­Š¨}q€¥Š¨«~}oËD|~ˆÌ‡€qËDˆ‰¤uˆ«qŠÇ€«©Ï‚ˆ‰ˆËŊ±¨‹zq«-¨‹«-б| ´ бz§±¨€ © ¨‚±ËDzq|~§2‚­ˆq…³é:«°’¼½¢ Ÿ±Ÿ »¥•I”V—¥šh½ ¿/.. ¼»dæ{”–”“4‘ À Ÿ2Ÿ ¡ü•I”_—/½ ¿ ¡Éœ Ÿ ævš±š‰½¢D• “¥¡ü• ½¥” ¿ ½4¼-ÜF½4änå] ¡:“4¡ • ½4”¹“4‘ ôl•I”V— •›šÅ¡ü• ¢DšÅ²~‡€¥}qˆ‚vŒÝ¸Õ์ÝV … â=ˆ¦–ˆˉË€ÿΖ…j¯l€q‚‚z««~ˆ€|x… Œq~… é:«VŠˆ}§2€4б¨‹«~} ì&§±¨›ËDˆ€«P€¥«–©+€4ŠŠˆ‰«Vб¨‹zq«€¥&ËDz«–‚Mб§±€¨‹«Vб‚…™é:« ’¼½4Ó ¢ Ÿ2Ÿ »4•I”_—4šÇ½ ¿ è]ÜÄæ?èãø + … Ͱ€§­Š±Æ€Ìƒn…–¯õz›€Ë2Ù¹…{ŒŒ…0{¬ˆ§zq€q© ¨‹«}N¨«VŠˆ«qб¨‹zq«‚ ÈÉzq§Êˆ1ÌËD¨ˆ‰«VŠ’‡§±€qËŊ¨›Ë‰€§±ˆ€q‚­zq«~¨‹«}…löǽ32 Â_šÅ² Џ~Þ ~ŒÝ&à Ýð… Ͱ€¥Š­Š±Æ~ˆ‰Ú ¶ Šzq«~ˆ6€¥«©(·Êzq«~«~¨ˆdñjˆ‰¦¦–ˆ§…ŒqŽ~…h곈Dí_´ б|€¥ ˆˉz«~zq¤Ç®ŠÆ~§±z|~}qÆoËDzq‚ˆÇËDzq|~‡~¨‹«~}z¥ÈF‚­®_«Vб€¥í €«©Ì‚ˆ‰¤6€¥«Vб¨Ë‚‰… é:«$’¼½¢ Ÿ2Ÿ »¥•I”V—¥šã½ ¿ øøqùÇèŔ–¡ Ÿ ¼Å”]“¥Ó ¡ • ½4”¹“4‘&4o½4¼ º šœ~½2å/½¥”öǓ¥¡ü ¼“¥‘–ô祔V—“‰— Ÿ ÷ Ÿ ” Ÿ ¼“4Ó ¡ • ½4”–²5?¨›€¥}q§±€¥´ zq« ´ ŠÆ~ˆ‰´6³€¥Ùˆq²_£’€«€©~€… Ͱ€§¨®V«+†Ç…Êñ-€¥Ùˆ§… ŒVÐ …òâ=ˆ©~|~«©~€«ËD®„¨‹«óËDz‹´ ›€¥¦]z§2€4б¨‹¬qˆ+© ¨›€¥z}|ˆ… é:«7,³½4 ¼Å¡ Ÿ2Ÿ ”–¡Iœ™èŔ–¡ Ÿ ¼Å”]“¥Ó ¡ • ½4”¹“4‘ÌÜF½¥” ¿DŸ ¼ Ÿ ”¹¢ Ÿ ½¥”kÜF½4änå] ¡:“4¡ • ½4”]“¥‘vôl•I”V— •›š2Ó ¡ • ¢Dš2²‡€¥}qˆ‚Ý¥á_à_ÝV Œq… Ͱ€§¨®V«-†h…Äñj€‹Ùqˆ‰§…ŒÝ…uèD” ¿ ½¥¼Åäu“¥¡ü• ½¥”]“4‘õÑ Ÿ »4 ”–Ó »q“4”¹¢DÔǓ¥”]»?Ñ Ÿ š‰½4 ¼±¢ Ÿ98 ½¥Â ”]»Õš=•I”d×Ǖ “4‘½— Ÿ …õ¯FÆÄ… yd… бÆ~ˆ‚¨›‚‰²ªn«~¨¬ˆ‰§2‚¨8ŠM®6zÈõ¯õˆ«~«‚®V¬4€¥«~¨›€~² y{ˆˉˆ‰¤Ç¦]ˆ‰§… ¶ 
бˆ‰¬qˆ$ñpƨ8ŠŠ±€¥Ùqˆ‰§6€¥«–©r¯ÁÆ~¨‹ ¶ Šˆ‰«Vбz«x…󌍎qŽ~…„£Ê|ˆ‚ €«©$ËDz«Vб§zq–¨«ˆ‰í ‡–ˆ§­Š?ËD¨ˆ‰«VŠ?© ¨€‹zq}|~ˆ‚‰… é:«°’¼½¢: ;աɜ(æ{”–”“¥‘õÀ Ÿ2Ÿ ¡ü•I”_—-½ ¿ ¡Éœ Ÿ ædÜÄô ú’ævš±š‰½¢D• “¥¡ü• ½¥” ½ ¿ ÜF½4änå] ¡:“4¡ • ½4”¹“4‘¹ôl•I”V— •›šÅ¡ü• ¢DšÅ²~‡€¥}qˆ‚vŒÕÐ¥ÝÕ์Ýq¸~…
Generic NLP Technologies: Language, Knowledge and Information Extraction

Junichi Tsujii
Department of Information Science, Faculty of Science
University of Tokyo, JAPAN
and Centre for Computational Linguistics, UMIST, UK

1 Introduction

We have witnessed significant progress in NLP applications such as information extraction (IE), summarization, machine translation, cross-lingual information retrieval (CLIR), etc. The progress will be accelerated by advances in speech technology, which not only enables us to interact with systems via speech but also to store and retrieve texts input via speech. The progress of NLP applications in this decade has been accomplished mainly by the rapid development of corpus-based and statistical techniques, while rather simple techniques have been used as far as the structural aspects of language are concerned. In this paper, we discuss how we can combine more sophisticated, linguistically elaborate techniques with the current statistical techniques, and what kinds of improvement we can expect from such an integration of different knowledge types and methods.

2 Argument against linguistically elaborate techniques

Throughout the 80s, research based on linguistics flourished even in application-oriented NLP research such as machine translation. Eurotra, a European MT project, attracted a large number of theoretical linguists into MT, and the linguists developed clean and linguistically elaborate frameworks such as CAT-, Simple Transfer, Eurotra-, etc. ATR, a Japanese research institute for telephone dialogue translation supported by a consortium of private companies and the Ministry of Posts and Telecommunications, also adopted a linguistics-based framework, although they changed their direction in the later stage of the project. They also adopted sophisticated plan-based dialogue models at the initial stage of the project.
However, the trend changed rather drastically in the early 90s, and most research groups with practical applications in mind gave up such strategies and switched to more corpus-oriented and statistical methods. Instead of sentential parsing based on linguistically well-founded grammar, for example, they started to use simpler but more robust techniques based on finite-state models. Neither did knowledge-based techniques like plan recognition survive, which presume explicit representation of domain knowledge. One of the major reasons for the failure of these techniques is that, while these techniques alone cannot solve the whole range of problems that an NLP application encounters, both linguists and AI researchers made strong claims that their techniques would be able to solve most, if not all, of the problems. Although formalisms based on linguistic theories can certainly contribute to the development of clean and modular frameworks for NLP, it is rather obvious that linguistic theories alone cannot solve most of NLP's problems. Most of MT's problems, for example, are related to semantics or the interpretation of language, for which linguistic theories of syntax can hardly offer solutions (Tsujii). However, this does not imply, either, that frameworks based on linguistic theories are of no use for MT or NLP applications in general. It only implies that we need techniques complementary to those based on linguistic theories, and that frameworks based on linguistic theories should be augmented or combined with other techniques. Since techniques from complementary fields such as statistical or corpus-based ones have made significant progress, it is our contention in this paper that we should start to think seriously about combining the fruits of the research results of the 80s with those of the 90s.
The other claims against linguistics-based and knowledge-based techniques which have often been made by practically minded people are:

(1) Efficiency: Techniques such as sentential parsing and knowledge-based inference are slow and require a large amount of memory.

(2) Ambiguity of parsing: Sentential parsing tends to generate thousands of parse results, from which systems cannot choose the correct one.

(3) Incompleteness of knowledge and robustness: In practice one cannot provide systems with complete knowledge. Defects in knowledge often cause failures in processing, which result in the fragile behavior of systems.

While these claims may have been the case during the 80s, the steady progress of these technologies has largely removed the difficulties. Instead, the disadvantages of the current techniques based on finite-state models, etc. have become increasingly clear: disadvantages such as the ad-hocness and opaqueness of systems, which prevent them from being transferred from an application in one domain to another.

3 The current state of the JSPS project

In a five-year project funded by JSPS (the Japan Society for the Promotion of Science), which started in September, we have focussed our research on generic techniques that can be used for different kinds of NLP applications and domains. The project comprises three university groups, from the University of Tokyo, Tokyo Institute of Technology (Prof. Tokunaga) and Kyoto University (Dr. Kurohashi), and is coordinated by myself (at the University of Tokyo). The University of Tokyo has been engaged in the development of software infrastructure for efficient NLP, parsing technology, and ontology building from texts, while the groups at Tokyo Institute of Technology and Kyoto University have been responsible for NLP applications to IR and for knowledge-based NLP techniques, respectively.
Since we have delivered promising results in research on generic NLP methods, we are now engaged in developing several application systems that integrate various research results to show their feasibility in actual application environments. One such application is a system that helps biochemists working in the field of genome research. The system integrates various research results of our project, such as new techniques for query expansion and intelligent indexing in IR. The two results to be integrated into the system that we focus on in this paper are IE using a full parser (a sentential parser based on grammar) and ontology building from texts.

IE is very much in demand in genome research, since quite a large portion of research is now targeted at constructing systems that model complete sequences of interaction of various materials in biological organisms. These systems require the extraction of relevant information from texts and its integration in fixed formats. This entails that the researchers should have a model of interaction among materials into which actual pieces of information extracted from texts are fitted. Such a model should have a set of classes of interaction (event classes) and a set of classes of entities that participate in events. That is, an ontology of the domain should exist. However, since the building of an ultimate ontology is, in a sense, the goal of science itself, the explicit ontology exists only in a very restricted and partial form. In other words, IE and ontology building are inevitably intertwined here. In short, we found that IE and ontology building from texts in genome research provide an ideal test bed for our generic NLP techniques, namely software infrastructure for efficient NLP, parsing technology, and ontology building from texts with initial partial knowledge of the domain.
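To make this interplay concrete, the toy sketch below (not the project's actual representation; all class, slot, and entity names are invented for illustration) shows how an ontology's event classes and entity classes can constrain the templates that IE fills from a sentence like "protein A inhibits gene B":

```python
# Toy illustration (not the project's actual system): an ontology of
# event classes with typed slots, and a filled IE template checked
# against it. All names here are invented.

# Entity classes form a small is-a hierarchy.
ISA = {"protein": "substance", "gene": "substance", "substance": "entity"}

def is_a(cls, ancestor):
    """True if cls equals ancestor or inherits from it."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = ISA.get(cls)
    return False

# Event classes constrain the entity class of each participant slot.
EVENT_CLASSES = {
    "inhibit": {"agent": "substance", "target": "substance"},
}

def fill_template(event, slots):
    """Check extracted slot fillers against the ontology's constraints."""
    schema = EVENT_CLASSES[event]
    for slot, (text, cls) in slots.items():
        if not is_a(cls, schema[slot]):
            raise ValueError(f"{text} ({cls}) cannot fill {slot}")
    return {"event": event, **{s: t for s, (t, _) in slots.items()}}

# "Protein A inhibits gene B" -> a filled, type-checked template.
record = fill_template("inhibit",
                       {"agent": ("protein A", "protein"),
                        "target": ("gene B", "gene")})
```

The point of the sketch is the mutual dependence the text describes: the template cannot be filled without the class hierarchy, and gaps found while filling templates are exactly where the partial ontology needs extending.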
4 Software Infrastructure and Parsing Technology

While tree structures are a versatile scheme for linguistic representation, the invention of feature structures that allow complex features and reentrancy (structure sharing) makes linguistic representation concise and allows declarative specification of the mutual relationships among representations of different linguistic levels (e.g. morphology, syntax, semantics, discourse, etc.). More importantly, using bundles of features instead of simple non-terminal symbols to characterize linguistic objects allows us to use much richer statistical means such as ME (the maximum entropy model) instead of simple probabilistic CFG. However, this potential has hardly been pursued yet, mostly due to the inefficiency and fragility of parsing based on feature-based formalisms. In order to remove the efficiency obstacle, we have in the first two years devoted ourselves to the development of:

(A) Software infrastructure that makes processing of feature-based formalisms efficient enough both for practical applications and for combining them with statistical means.

(B) Grammars (Japanese and English) with wide coverage for processing real-world texts (not examples from linguistics textbooks), together with processing techniques that make a system robust enough for application.

(C) Efficient parsing algorithms for linguistics-based frameworks, in particular HPSG.

We describe the current state of these three in the following.

(A) Software Infrastructure (Miyao 2000): We designed and developed a programming system, LiLFeS, which is an extension of Prolog for expressing typed feature structures instead of first-order terms. The system's core engine is an abstract machine that can process feature structures and execute definite clause programs.
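As a rough illustration of the kind of object such an engine manipulates, the sketch below unifies feature structures encoded as nested Python dicts. It deliberately ignores types, reentrancy (structure sharing) and efficiency, all of which a real engine such as LiLFeS must handle, so it is a conceptual aid only, not a model of the abstract machine:

```python
# Minimal sketch of feature-structure unification over nested dicts.
# Real systems (e.g. LiLFeS) also handle typed features, reentrancy and
# efficient failure; none of that is modelled here.

def unify(a, b):
    """Return the unification of two feature structures, or None on clash."""
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(a)
        for feat, val in b.items():
            if feat in out:
                sub = unify(out[feat], val)
                if sub is None:
                    return None          # feature values clash
                out[feat] = sub
            else:
                out[feat] = val          # feature present only in b
        return out
    return a if a == b else None         # atomic values must match

np = {"cat": "NP", "agr": {"num": "sg"}}
subj = {"agr": {"num": "sg", "per": "3"}}
merged = unify(np, subj)   # succeeds, merging the two agreement bundles
clash = unify({"agr": {"num": "sg"}}, {"agr": {"num": "pl"}})  # fails
```

Even this naive version shows why unification dominates parsing cost: every rule application recursively walks and copies feature bundles, which is what the abstract-machine implementation is designed to make fast.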
While similar attempts treat feature structure processing separately from that of definite clause programs, the LiLFeS abstract machine increases processing speed by seamlessly processing feature structures and definite clause programs. Diverse systems, such as large-scale English and Japanese grammars, a statistical disambiguation module for the Japanese parser, and a robust parser for English, have already been developed in the LiLFeS system. We compared the performance of the system with other systems, in particular with LKB, developed by CSLI, Stanford University, using the same grammar (LinGo, also provided by Stanford University). A parsing system in LiLFeS that adopts a naive CKY algorithm without any sophistication shows performance similar to that of LKB, which uses a more refined algorithm to filter out unnecessary unification. Detailed examination reveals that feature unification in the LiLFeS system is about four times faster than in LKB. Furthermore, since LiLFeS has quite a few built-in functions that facilitate fast subsumption checking, efficient memory management, etc., the performance comparison reveals that more advanced parsing algorithms like the one we developed in (C) can benefit from the LiLFeS system. We have almost finished the second version of the LiLFeS system, which uses a more fine-grained instruction set, directly translatable to native machine code of a Pentium CPU. The new version shows more than a twofold improvement in execution speed, which means that the naive CKY algorithm without any sophistication in the LiLFeS system will outperform LKB.

(B) Grammar with wide coverage (Tateisi; Mitsuishi): While LinGo, which we used for the comparison, is an interesting grammar from the viewpoint of linguistics, its coverage is rather restricted. We have cooperated with the University of Pennsylvania to develop a grammar with wide coverage.
In this cooperation, we translated an existing wide-coverage grammar, XTAG, into the framework of HPSG, since our parsing algorithms in (C) all assume that the grammar is an HPSG. As we discuss in the following section, we will use this translated grammar as the core grammar for information extraction from texts in genome science.

As for a wide-coverage Japanese grammar, we have developed our own grammar (SLUNG). SLUNG exploits the property of HPSG that allows under-specified constraints. That is, in order to obtain wide coverage from the very beginning of grammar development, we give only loose constraints to individual words; these may over-generate wrong interpretations but nonetheless guarantee that correct ones are always generated. Instead of rather rigid and strict constraints, we prepare a set of templates for lexical entries that specify the behavior of the words belonging to each template class. This approach is against the spirit of HPSG, or lexicalized grammar in general, which emphasizes constraints specific to individual lexical items. However, our goal is first to develop a wide-coverage grammar that can then be improved by adding lexical-item-specific constraints in a later stage of grammar development. The strategy has proved to be effective, and the current grammar produces successful parse results for . % of the sentences in the EDR corpus with high efficiency (0. sec per sentence for the EDR corpus). Since the grammar overgenerates, we have to choose a single parse result from among a combinatorially large number of possible parses. However, an experiment shows that a statistical method using ME (we use the program for ME developed by NYU) can select around . % of correct analyses in terms of dependency relationships among bunsetsu (the phrases of Japanese).
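The template idea can be sketched as follows; this is a minimal illustration with invented template names, words and constraints, not SLUNG's actual inventory. Each word simply points to one of a small number of templates whose constraints are deliberately loose, so coverage comes first and word-specific tightening can be added later:

```python
# Invented illustration of the template-based lexicon: each word maps to
# one of a few lexical templates carrying deliberately loose constraints.

TEMPLATES = {
    # A transitive-verb template that underspecifies its object: anything
    # nominal is (over-)accepted, guaranteeing the correct reading is
    # always among the generated candidates.
    "verb_trans": {"cat": "V", "subj": {"cat": "NP"}, "obj": {"cat": "nominal"}},
    "noun":       {"cat": "N"},
}

LEXICON = {"yomu": "verb_trans", "hon": "noun"}  # "read", "book"

def lexical_entry(word):
    """Look up a word's template; copying keeps the templates shareable."""
    return dict(TEMPLATES[LEXICON[word]])

entry = lexical_entry("yomu")
```

The trade-off the text describes falls out directly: loose constraints like `"nominal"` admit wrong parses (hence the need for the statistical disambiguator), but no correct parse is ever ruled out by an over-strict lexical entry.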
(C) Efficient parsing algorithm (Torisawa 2000): While feature structure representation provides an effective means of representing linguistic objects and constraints on them, checking satisfiability of constraints by linguistic objects, i.e. unification, is computationally expensive in terms of time and space. One way of improving the efficiency is to avoid unification operations as much as possible, while the other way is to provide efficient software infrastructure such as in (A). Once we choose a specific task like parsing, generation, etc., we can devise efficient algorithms for avoiding unification. LKB accomplishes such reduction by inspecting dependencies among features, while the algorithm we chose reduces the necessary unification by compiling a given HPSG grammar into a CFG. The CFG skeleton of the given HPSG, which is semi-automatically extracted from the original HPSG, is applied to produce possible candidates of parse trees in the first phase. The skeletal parsing based on the extracted CFG filters out the local constituent structures which do not contribute to any parse covering the whole sentence. Since a large proportion of local constituent structures do not actually contribute to the whole parse, this first CFG phase helps the second phase to avoid most of the globally meaningless unification. The efficiency gain by this compilation technique depends on the nature of the original grammar to be compiled. While the efficiency gain for SLUNG is just two times, the gain for XHPSG (the HPSG grammar obtained by translating the XTAG grammar into HPSG) is around  times for the ATIS corpus (Tateisi 1998).
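The first, CFG-skeleton phase amounts to ordinary CKY recognition over the extracted context-free backbone. A minimal sketch with an invented toy grammar (extraction of the skeleton from HPSG and the second, unification phase are elided):

```python
from collections import defaultdict

# Invented toy CFG skeleton; a real skeleton is extracted from the HPSG.
rules = [("S", ("NP", "VP")), ("VP", ("V", "NP")), ("NP", ("Det", "N"))]
lexicon = {"the": "Det", "dog": "N", "saw": "V", "cat": "N"}

def cky(words):
    n = len(words)
    chart = defaultdict(set)  # (i, j) -> categories spanning words[i:j]
    for i, w in enumerate(words):
        chart[(i, i + 1)].add(lexicon[w])
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for mid in range(i + 1, i + span):
                for lhs, (b, c) in rules:
                    if b in chart[(i, mid)] and c in chart[(mid, i + span)]:
                        chart[(i, i + span)].add(lhs)
    return chart

chart = cky("the dog saw the cat".split())
print("S" in chart[(0, 5)])
```

In the two-phase scheme, only chart edges reachable from a spanning `S` would be re-checked with full feature unification, which is what filters out the globally meaningless unifications.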
Information extraction by sentential parsing

The basic arguments against the use of sentential parsing in practical applications such as IE are the inefficiency in terms of time and space, the fragility of systems based on linguistically rigid frameworks, and the highly ambiguous parse results that we often have as results of parsing. On the other hand, there are arguments for sentential parsing, or the deep analysis approach. One argument is that an approach based on linguistically sound frameworks makes systems transparent and easy to re-use. The other is the limit on the quality that is achievable by the pattern-matching approach. While a higher recall rate of IE requires a large number of patterns to cover diverse surface realizations of the same information, we have to widen linguistic contexts to improve the precision by preventing extraction of false information. A pattern-based system may end up with a set of patterns whose complex mutual interactions nullify the initial appeal of simplicity of the pattern-based approach. As we saw in the previous section, the efficiency problem becomes less problematic by utilizing current parsing technology. It is still a problem when we apply deep analysis to texts in the field of genome science, which tend to have much longer sentences than those in the ATIS corpus. However, as in the pattern-based approach, we can reduce the complexity of the problem by combining different techniques. In a preliminary experiment, we first use a shallow parser (ENGCG) to reduce part-of-speech ambiguities before sentential parsing. Unlike statistical POS taggers, the constraint grammar adopted by ENGCG preserves all possible POS interpretations, merely dropping interpretations that are impossible in given local contexts.
Therefore, the use of ENGCG does not affect the soundness and completeness of the whole system, while it significantly reduces the local ambiguities that do not contribute to the whole parse. The experiment shows that ENGCG prevents 0 % of the edges produced by a parser based on the naive CKY algorithm, when it is applied to 0 sentences randomly chosen from MEDLINE abstracts (Yakushiji 2000). As a result, parsing by XHPSG becomes four times faster, from 0.0 seconds to . second per sentence, which is further improved, by using chunking based on the output of a named-entity recognition tool, to . second per sentence. Since the experiment was conducted with a naive parser based on CKY and the old version of LiLFeS, the performance can be improved further. The problems of fragility and ambiguity still remain. XHPSG fails to produce parses covering the whole sentence for about half of the sentences. However, in applications such as IE, a system need not have parses covering the whole sentence. If the part in which the relevant pieces of information appear can be parsed, the system can extract them. This is one of the major reasons why pattern-based systems can work in a robust manner. The same idea can be used in IE based on a sentential parser. That is, techniques that can extract information from partial parse results will make the system robust. The problem of ambiguity can be treated in a similar manner. In a pattern-based system, the system extracts information when parts of the text match a pattern, independently of whether other interpretations exist that compete with the interpretation intended by the pattern. In this way, a pattern-based system treats ambiguity implicitly. In the case of the approach based on sentential parsing, we treat the ambiguity problem by preference.
That is, an interpretation that indicates that relevant pieces of information exist is preferred to other interpretations. Although the methods illustrated above make IE based on sentential parsing similar to the pattern-based approach, the approach retains its advantages over the pattern-based one. For example, it can prevent false extraction if the pattern that dictates extraction contradicts wider linguistic structures or the more preferred interpretations. It keeps separate the general linguistic knowledge embodied in the form of the XHPSG grammar, which can be used in any domain. The mapping between syntactic structures and predicate structures can also be systematic.

Information extraction of named entities using a hidden Markov model

The named-entity tool mentioned above, called NEHMM (Collier 2000), has been developed as a generalizable supervised learning method for identifying and classifying terms, given a training corpus of SGML marked-up texts. HMMs themselves belong to a class of learning algorithms that can be considered to be stochastic finite state machines. They have enjoyed success in a wide number of fields including speech recognition and part-of-speech tagging. We therefore consider their extension to the named-entity task, which is essentially a kind of semantic tagging of words based on their class, to be quite natural. NEHMM itself strives to be highly generalizable to terms in different domains, and the initial version uses bigrams based on lexical and character features, with one state per name class. Data sparseness is overcome using the character features and linear interpolation. Nobata et al. (Nobata 1999) comment on the particular difficulties with identifying and classifying terms in the biochemistry domain, including an open vocabulary and irregular naming conventions, as well as extensive cross-over in vocabulary between classes.
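A bigram HMM with one state per name class can be decoded with the standard Viterbi algorithm. The sketch below is illustrative only: all probabilities are invented, and NEHMM's character features and linear interpolation are not modeled.

```python
import math

# One state per name class; invented toy parameters.
states = ["PROTEIN", "DNA", "OTHER"]
start = {"PROTEIN": 0.3, "DNA": 0.2, "OTHER": 0.5}
trans = {
    "PROTEIN": {"PROTEIN": 0.5, "DNA": 0.1, "OTHER": 0.4},
    "DNA":     {"PROTEIN": 0.1, "DNA": 0.5, "OTHER": 0.4},
    "OTHER":   {"PROTEIN": 0.3, "DNA": 0.2, "OTHER": 0.5},
}
emit = {
    "PROTEIN": {"AKT": 0.4,  "PKB": 0.4,  "binds": 0.01, "the": 0.01},
    "DNA":     {"AKT": 0.05, "PKB": 0.05, "binds": 0.01, "the": 0.01},
    "OTHER":   {"AKT": 0.01, "PKB": 0.01, "binds": 0.3,  "the": 0.4},
}

def viterbi(words):
    # delta[s]: best log-probability of any state path ending in s
    delta = {s: math.log(start[s] * emit[s][words[0]]) for s in states}
    backptrs = []
    for w in words[1:]:
        prev = delta
        delta, ptr = {}, {}
        for s in states:
            best = max(states, key=lambda p: prev[p] + math.log(trans[p][s]))
            delta[s] = prev[best] + math.log(trans[best][s] * emit[s][w])
            ptr[s] = best
        backptrs.append(ptr)
    # Follow back-pointers from the best final state.
    path = [max(states, key=delta.get)]
    for ptr in reversed(backptrs):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["AKT", "binds", "PKB"]))
```

With these toy parameters, the protein names AKT and PKB mentioned below are tagged PROTEIN and the verb OTHER.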
The irregular naming arises in part because of the number of researchers from different fields who are working on the same knowledge discovery area, as well as the large number of proteins, DNA, etc. that need to be named. Despite the best efforts of major journals to standardize the terminology, there is also a significant problem with synonymy, so that often an entity has more than one name that is widely used, such as the protein names AKT and PKB. Class cross-over of terms is another problem, which arises because many DNA and RNA are named after the protein with which they transcribe. Despite the apparent simplicity of the knowledge in NEHMM, the model has proven to be quite powerful in application. In the genome domain, with only 0 training MEDLINE abstracts, it could achieve over % F-score (a common metric for evaluation used in IE that combines recall and precision). Similar performance has been found when training using the dry-run and test sets for MUC- (0 articles) in the news domain. The next stage in the development of our model is to train using larger test sets and to incorporate wider contextual knowledge, perhaps by marking up dependencies of named entities in the training corpus. This extra level of structural knowledge should help to constrain class assignment and also aid in higher levels of IE such as event extraction.

Knowledge Building and Text Annotation

Annotated corpora constitute not only an integral part of a linguistic investigation but also an essential part of the design methodology for NLP systems. In particular, the design of IE systems requires a clear understanding of the information formats of the domain, i.e. what kinds of entities and events are considered essential ingredients of information. However, such information formats are often implicit in the minds of domain specialists, and the process of annotating texts helps to reveal them.
It is also the case that the mapping between information formats and surface linguistic realization is not trivial, and that capturing the mapping requires empirical examination of actual corpora. While generic programs with learning ability may learn such a mapping, learning algorithms need training data, i.e. annotated corpora. In order to design an NE recognition program, for example, we have to have a reasonable amount of annotated text which shows in what linguistic contexts named entities appear and what internal structures typical linguistic expressions of named entities of a given field have. Human inspection of such annotated texts suggests feasible tools for NE (e.g. HMM, ME, decision trees, dictionary look-up, etc.) and a set of feasible features, if one uses programs with learning ability. Human inspection of annotated corpora is still an inevitable step of feature selection, even if one uses programs with learning ability. More importantly, determining the classes of named entities and events which should reflect the views of domain specialists requires empirical investigation, since these often exist implicitly only in the minds of specialists. This is particularly the case in the fields of medical and biological sciences, since they have a much larger collection of terms (i.e. class names) than, for example, mathematical science, physics, etc. In order to see the magnitude of the work and the difficulties involved, we chose a well-circumscribed field and collected texts (MEDLINE abstracts) in the field to be annotated. The field is the reaction of transcription factors in human blood cells. The kind of information that we try to extract is information on protein-protein interactions.
The field was chosen because a research group of the National Health Research Institute of the Ministry of Health in Japan is building a database called CSNDB (Cell Signal Network DB), which gathers this type of information. They read papers every week to extract relevant information and store it in the database. IE in this field can reduce the work that is done manually at present. We selected abstracts from MEDLINE by the keywords "human", "transcription factors" and "blood cells", which yield 00 abstracts. The abstracts are from 00 to 00 words in length. 00 abstracts were chosen randomly and annotated. Currently, semantic annotation of 00 abstracts has been finished, and we expect 00 abstracts to be done by April (Ohta 2000). The task of annotation can be regarded as identifying and classifying the terms that appear in texts according to a pre-defined classification scheme. The classification scheme, in turn, reflects the view of the field that biochemists have. That is, the semantic tags we use are the class names in an ontology of the field. Ontologies of biological terminology have been created in projects such as the EU-funded GALEN project to provide a model of biological concepts that can be used to integrate heterogeneous information sources, while some ontologies such as MeSH are built for the purpose of information retrieval. According to their purposes, ontologies differ from fine-grained to coarse ones and from associative to logical ones. Since there is no appropriate ontology that covers the domain in which we are interested, we decided to build one for this specific domain. The design of our ontology is in progress, in which we distinguish classification based on the roles that proteins play in events from that based on the internal structures of proteins. The former classification is closely linked with the classification of events.
Since the classification is based on feature lattices, we plan to use the LiLFeS system to define these classification schemes and the relationships among them.

Future Directions

While the research of the 80s and 90s in NLP focussed on different aspects of language, these lines of work have so far seen separate development, and no serious attempt has been made to integrate them. In the JSPS project, we have prepared the necessary background for such integration. Technological background such as efficient parsing, a programming system based on types, etc. will contribute to resolving efficiency problems. Techniques such as NE recognition, the staged architecture in conventional IE, etc. will give hints on how to incorporate several different techniques in the whole system. A reasonable amount of semantically annotated text, together with the relevant ontology, has been prepared. We are now engaged in integrating these components in the whole system, in order to show how theoretical work, together with the collection of empirical data, can facilitate the systematic development of NLP application systems.

References

Collier, N., Nobata, C., and Tsujii, J.: "Extracting the Names of Genes and Gene Products with a Hidden Markov Model", COLING 2000 (August), 2000.
Mitsuishi, Y. et al.: "HPSG-style Underspecified Japanese Grammar with Wide Coverage", in Proc. of Coling-ACL '98, Montreal, 1998.
Miyao, Y., Makino, T., et al.: "The LiLFeS Abstract Machine and its Evaluation with LinGo", Special Issue on Efficient Processing of HPSG, Journal of Natural Language Engineering, Cambridge University Press, 2000 (to appear).
Nobata, C., Collier, N., and Tsujii, J.: "Automatic Term Identification and Classification in Biology Texts", in Proc. of the Natural Language Pacific Rim Symposium (NLPRS '99), Beijing, China, 1999.
Ohta, T., et al.: "A Semantically Tagged Corpus based on an Ontology for Molecular Biology", in Proc.
of the JSPS Symposium 2000, Tokyo, 2000.
Tateisi, Y. et al.: "Translating the XTAG English Grammar to HPSG", in Proc. of the TAG+ Workshop, University of Pennsylvania, 1998.
Torisawa, K. et al.: "An HPSG Parser with CFG Filtering", Special Issue on Efficient Processing of HPSG, Journal of Natural Language Engineering, Cambridge University Press, 2000 (to appear).
Tsujii, J.: "MT Research: Productivity and Conventionality of Language", RANLP-97, Tzigov Chark, Bulgaria, September 1997.
Yakushiji, A.: "Domain-Independent System for Event Frame Extraction Using an HPSG Parser", BSc Dissertation, Department of Information Science, University of Tokyo, 2000.
An Empirical Study of the Influence of Argument Conciseness on Argument Effectiveness

Giuseppe Carenini
Intelligent Systems Program
University of Pittsburgh, Pittsburgh, PA 15260, USA
[email protected]

Johanna D. Moore
The Human Communication Research Centre, University of Edinburgh, 2 Buccleuch Place, Edinburgh EH8 9LW, UK.
[email protected]

Abstract

We have developed a system that generates evaluative arguments that are tailored to the user, properly arranged and concise. We have also developed an evaluation framework in which the effectiveness of evaluative arguments can be measured with real users. This paper presents the results of a formal experiment we have performed in our framework to verify the influence of argument conciseness on argument effectiveness.

1 Introduction

Empirical methods are critical to gauge the scalability and robustness of proposed approaches, to assess progress and to stimulate new research questions. In the field of natural language generation, empirical evaluation has only recently become a top research priority (Dale, Eugenio et al. 1998). Some empirical work has been done to evaluate models for generating descriptions of objects and processes from a knowledge base (Lester and Porter March 1997), text summaries of quantitative data (Robin and McKeown 1996), descriptions of plans (Young to appear) and concise causal arguments (McConachy, Korb et al. 1998). However, little attention has been paid to the evaluation of systems generating evaluative arguments, communicative acts that attempt to affect the addressee's attitudes (i.e. evaluative tendencies typically phrased in terms of like and dislike or favor and disfavor). The ability to generate evaluative arguments is critical in an increasing number of online systems that serve as personal assistants, advisors, or shopping assistants1. For instance, a shopping assistant may need to compare two similar products and argue why its current user should like one more than the other.
1 See for instance www.activebuyersguide.com

In the remainder of the paper, we first describe a computational framework for generating evaluative arguments at different levels of conciseness. Then, we present an evaluation framework in which the effectiveness of evaluative arguments can be measured with real users. Next, we describe the design of an experiment we ran within the framework to verify the influence of argument conciseness on argument effectiveness. We conclude with a discussion of the experiment's results.

2 Generating concise evaluative arguments

Often an argument cannot mention all the available evidence, usually for the sake of brevity. According to argumentation theory, the selection of what evidence to mention in an argument should be based on a measure of the evidence's strength of support (or opposition) for the main claim of the argument (Mayberry and Golden 1996). Furthermore, argumentation theory suggests that for evaluative arguments the measure of evidence strength should be based on a model of the intended reader's values and preferences. Following argumentation theory, we have designed an argumentative strategy for generating evaluative arguments that are properly arranged and concise (Carenini and Moore 2000). In our strategy, we assume that the reader's values and preferences are represented as an additive multiattribute value function (AMVF), a conceptualization based on multiattribute utility theory (MAUT) (Clemen 1996). This allows us to adopt and extend a measure of evidence strength proposed in previous work on explaining decision-theoretic advice based on an AMVF (Klein 1994).

[Figure 1: Sample additive multiattribute value function (AMVF)]

The argumentation strategy has been implemented as part of a complete argument generator.
Other modules of the generator include a microplanner, which performs aggregation and pronominalization and makes decisions about cue phrases and scalar adjectives, along with a sentence realizer, which extends previous work on realizing evaluative statements (Elhadad 1995).

2.1 Background on AMVF

An AMVF is a model of a person's values and preferences with respect to entities in a certain class. It comprises a value tree and a set of component value functions, one for each primitive attribute of the entity. A value tree is a decomposition of the value of an entity into a hierarchy of aspects of the entity2, in which the leaves correspond to the entity's primitive attributes (see Figure 1 for a simple value tree in the real estate domain). The arcs of the tree are weighted to represent the importance of the value of an objective in contributing to the value of its parent in the tree (e.g., in Figure 1 location is more than twice as important as size in determining the value of a house). Note that the sum of the weights at each level is equal to 1. A component value function for an attribute expresses the preferability of each attribute value as a number in the [0,1] interval. For instance, in Figure 1 neighborhood n2 has preferability 0.3, and a distance-from-park of 1 mile has preferability (1 - (1/5 * 1)) = 0.8.

2 In decision theory these aspects are called objectives. For consistency with previous work, we will follow this terminology in the remainder of the paper.
Formally, an AMVF predicts the value v(e) of an entity e as follows:

v(e) = v(x1,…,xn) = Σ wi vi(xi), where
- (x1,…,xn) is the vector of attribute values for an entity e
- ∀ attribute i, vi is the component value function, which maps the least preferable xi to 0, the most preferable to 1, and the other xi to values in [0,1]
- wi is the weight for attribute i, with 0 ≤ wi ≤ 1 and Σ wi = 1
- wi is equal to the product of all the weights from the root of the value tree to the attribute i

A function vo(e) can also be defined for each objective o. When applied to an entity, this function returns the value of the entity with respect to that objective. For instance, assuming the value tree shown in Figure 1, we have:

vLocation(e) = (0.4 * vNeighborhood(e)) + (0.6 * vDist-from-park(e))

Thus, given someone's AMVF, it is possible to compute how valuable an entity is to that individual. Furthermore, it is possible to compute how valuable any objective (i.e., any aspect of that entity) is for that person. All of these values are expressed as a number in the interval [0,1].

2.2 A measure of evidence strength

Given an AMVF for a user applied to an entity (e.g., a house), it is possible to define a precise measure of an objective's strength in determining the evaluation of its parent objective for that entity. This measure is proportional to two factors: (A) the weight of the objective (which is by itself a measure of importance), and (B) a factor that increases equally for high and low values of the objective, because an objective can be important either because it is liked a lot or because it is disliked a lot.

[Figure 2: Sample population of objectives represented by dots and ordered by their compellingness]
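To make the machinery of this section concrete, here is a minimal sketch of an AMVF together with the strength measure and inclusion threshold it supports (s-compellingness and s-notably-compelling?, defined below). All weights, attributes, and component value functions are invented for illustration, not taken from the paper's models, and the sample standard deviation is used for σ:

```python
from statistics import mean, stdev

# Hypothetical flattened AMVF for a house: each leaf attribute maps to
# (weight from the root, component value function). Weights sum to 1.
AMVF = {
    "neighborhood":   (0.28, lambda n: {"n1": 1.0, "n2": 0.3}[n]),
    "dist_from_park": (0.42, lambda miles: max(0.0, 1 - miles / 5)),
    "size":           (0.30, lambda sqft: min(1.0, sqft / 3000)),
}

def v(entity):
    """v(e) = sum_i w_i * v_i(x_i), a value in [0, 1]."""
    return sum(w * f(entity[a]) for a, (w, f) in AMVF.items())

def s_compellingness(attr, entity):
    """w(o, refo) * max(v_o(e), 1 - v_o(e)), with refo = root here."""
    w, f = AMVF[attr]
    val = f(entity[attr])
    return w * max(val, 1 - val)

def worth_mentioning(entity, k):
    """s-notably-compelling?: keep objectives whose s-compellingness
    exceeds mean + k * stdev over the objective population."""
    scores = {a: s_compellingness(a, entity) for a in AMVF}
    threshold = mean(scores.values()) + k * stdev(scores.values())
    return [a for a, s in scores.items() if s > threshold]

house = {"neighborhood": "n2", "dist_from_park": 1.0, "size": 1500}
print(round(v(house), 3))            # overall value of the house
print(worth_mentioning(house, k=0))  # lower k admits more evidence
```

Sweeping k from 1 down to -1 on such a model reproduces the behavior described in Section 2.3: progressively more pieces of evidence clear the threshold and appear in the argument.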
We call this measure s-compellingness and provide the following definition: s-compellingness(o, e, refo) = (A)∗ (B) = = w(o,refo)∗ max[[vo(e)]; [1 – vo(e)]], where − o is an objective, e is an entity, refo is an ancestor of o in the value tree − w(o,refo) is the product of the weights of all the links from o to refo − vo is the component value function for leaf objectives (i.e., attributes), and it is the recursive evaluation over children(o) for nonleaf objectives Given a measure of an objective's strength, a predicate indicating whether an objective should be included in an argument (i.e., worth mentioning) can be defined as follows: s-notably-compelling?(o,opop,e, refo) ≡ s-compellingness(o, e, refo)>µx+kσx , where − o, e, and refo are defined as in the previous Def; opop is an objective population (e.g., siblings(o)), and opop>2 − p∈ opop; x∈X = s-compellingness(p, e, refo) − µx is the mean of X, σx is the standard deviation and k is a user-defined constant Similar measures for the comparison of two entities are defined and extensively discussed in (Klein 1994). 2.3 The constant k In the definition of s-notably-compelling?, the constant k determines the lower bound of scompellingness for an objective to be included in an argument. As shown in Figure 2, for k=0 only objectives with s-compellingness greater Figure 3 Arguments about the same house, tailored to the same subject but with k ranging from 1 to –1 than the average s-compellingness in a population are included in the argument (4 in the sample population). For higher positive values of k less objectives are included (only 2, when k=1), and the opposite happens for negative values (8 objectives are included, when k=-1). Therefore, by setting the constant k to different values, it is possible to control in a principled way how many objectives (i.e., pieces of evidence) are included in an argument, thus controlling the degree of conciseness of the generated arguments. 
Figure 3 clearly illustrates this point by showing seven arguments generated by our argument generator in the real-estate domain. These arguments are about the same house, tailored to the same subject, for k ranging from 1 to –1. 3 The evaluation framework In order to evaluate different aspects of the argument generator, we have developed an evaluation framework based on the task efficacy evaluation method. This method allows Figure 4 The evaluation framework architecture the experimenter to evaluate a generation model by measuring the effects of its output on user’s behaviors, beliefs and attitudes in the context of a task. Aiming at general results, we chose a rather basic and frequent task that has been extensively studied in decision analysis: the selection of a subset of preferred objects (e.g., houses) out of a set of possible alternatives. In the evaluation framework that we have developed, the user performs this task by using a computer environment (shown in Figure 5) that supports interactive data exploration and analysis (IDEA) (Roth, Chuah et al. 1997). The IDEA environment provides the user with a set of powerful visualization and direct manipulation techniques that facilitate the user’s autonomous exploration of the set of alternatives and the selection of the preferred alternatives. Let’s examine now how an argument generator can be evaluated in the context of the selection task, by going through the architecture of the evaluation framework. 3.1 The evaluation framework architecture Figure 4 shows the architecture of the evaluation framework. The framework consists of three main sub-systems: the IDEA system, a User Model Refiner and the Argument Generator. The framework assumes that a model of the user’s preferences (an AMVF) has been previously acquired from the user, to assure a reliable initial model. 
At the onset, the user is assigned the task of selecting from the dataset the four most preferred alternatives and placing them in a Hot List (see Figure 5, upper right corner) ordered by preference. The IDEA system supports the user in this task (Figure 4 (1)). As the interaction unfolds, all user actions are monitored and collected in the User's Action History (Figure 4 (2a)). Whenever the user feels that the task is accomplished, the ordered list of preferred alternatives is saved as her Preliminary Decision (Figure 4 (2b)). After that, this list, the User's Action History and the initial Model of User's Preferences are analysed by the User Model Refiner (Figure 4 (3)) to produce a Refined Model of the User's Preferences (Figure 4 (4)). At this point, the stage is set for argument generation. Given the Refined Model of the User's Preferences, the Argument Generator produces an evaluative argument tailored to the model (Figure 4 (5-6)), which is presented to the user by the IDEA system (Figure 4 (7)). The argument's goal is to introduce a new alternative (not included in the dataset initially presented to the user) and to persuade the user that the alternative is worth being considered. The new alternative is designed on the fly to be preferable for the user given her preference model.

[Figure 5: The IDEA environment display at the end of the interaction]

All the information about the new alternative is also presented graphically. Once the argument is presented, the user may (a) decide immediately to introduce the new alternative in her Hot List, (b) decide to further explore the dataset, possibly making changes to the Hot List, including adding the new instance to the Hot List, or (c) do nothing. Figure 5 shows the display at the end of the interaction, when the user, after reading the argument, has decided to introduce the new alternative in the Hot List's first position (Figure 5, top right).
Whenever the user decides to stop exploring and is satisfied with her final selections, measures related to the argument's effectiveness can be assessed (Figure 4 (8)). These measures are obtained either from the record of the user's interaction with the system or from user self-reports in a final questionnaire (see Figure 6 for an example of self-report) and include:
- Measures of behavioral intentions and attitude change: (a) whether or not the user adopts the new proposed alternative, (b) in which position in the Hot List she places it and (c) how much she likes the new alternative and the other objects in the Hot List.
- A measure of the user's confidence that she has selected the best for her in the set of alternatives.
- A measure of argument effectiveness derived by explicitly questioning the user at the end of the interaction about the rationale for her decision (Olson and Zanna 1991). This can provide valuable information on what aspects of the argument were more influential (i.e., better understood and accepted by the user).
- An additional measure of argument effectiveness is to explicitly ask the user at the end of the interaction to judge the argument with respect to several dimensions of quality, such as content, organization, writing style and convincingness. However, evaluations based on judgements along these dimensions are clearly weaker than evaluations measuring actual behavioural and attitudinal changes (Olson and Zanna 1991).

[Figure 6: Self-report on user's satisfaction with houses in the Hot List]
[Figure 7: Hypotheses on experiment outcomes]

To summarize, the evaluation framework just described supports users in performing a realistic task at their own pace by interacting with an IDEA system. In the context of this task, an evaluative argument is generated and measurements related to its effectiveness can be performed.
We now discuss an experiment that we have performed within the evaluation framework 4 The Experiment The argument generator has been designed to facilitate testing the effectiveness of different aspects of the generation process. The experimenter can easily control whether the generator tailors the argument to the current user, the degree of conciseness of the argument (by varying k as explained in Section 2.3), and what microplanning tasks the generator performs. In the experiment described here, we focused on studying the influence of argument conciseness on argument effectiveness. A parallel experiment about the influence of tailoring is described elsewhere. We followed a between-subjects design with three experimental conditions: No-Argument - subjects are simply informed that a new house came on the market. Tailored-Concise - subjects are presented with an evaluation of the new house tailored to their preferences and at a level of conciseness that we hypothesize to be optimal. To start our investigation, we assume that an effective argument (in our domain) should contain slightly more than half of the available evidence. By running the generator with different values for k on the user models of the pilot subjects, we found that this corresponds to k=-0.3. In fact, with k=-0.3 the arguments contained on average 10 pieces of evidence out of the 19 available. Tailored-Verbose - subjects are presented with an evaluation of the new house tailored to their preferences, but at a level of conciseness that we hypothesize to be too low (k=-1, which corresponds on average, in our analysis of the pilot subjects, to 16 pieces of evidence out of the possible 19). In the three conditions, all the information about the new house is also presented graphically, so that no information is hidden from the subject. Our hypotheses on the outcomes of the experiment are summarized in Figure 7. 
We expect arguments generated for the Tailored-Concise condition to be more effective than arguments generated for the Tailored-Verbose condition. We also expect the Tailored-Concise condition to be somewhat better than the No-Argument condition, but to a lesser extent, because subjects, in the absence of any argument, may spend more time further exploring the dataset, thus reaching a more informed and balanced decision. Finally, we do not have strong hypotheses on comparisons of argument effectiveness between the No-Argument and Tailored-Verbose conditions. The experiment is organized in two phases. In the first phase, the subject fills out a questionnaire on the Web. The questionnaire implements a method from decision theory to acquire an AMVF model of the subject's preferences (Edwards and Barron 1994). In the second phase of the experiment, to control for possible confounding variables (including subject's argumentativeness (Infante and Rancer 1982), need for cognition (Cacioppo, Petty et al. 1983), intelligence and self-esteem), the subject is randomly assigned to one of the three conditions. Then, the subject interacts with the evaluation framework and at the end of the interaction measures of the argument effectiveness are collected, as described in Section 3.1.

[Figure 8: Sample filled-out self-report on user's satisfaction with houses in the Hot List.]
After running the experiment with 8 pilot subjects to refine and improve the experimental procedure, we ran a formal experiment involving 30 subjects, 10 in each experimental condition.

5 Experiment Results
5.1 A precise measure of satisfaction
According to the literature on persuasion, the most important measures of argument effectiveness are the ones of behavioral intentions and attitude change. As explained in Section 3.1, in our framework such measures include (a) whether or not the user adopts the new proposed alternative, (b) in which position in the Hot List she places it, (c) how much she likes the proposed new alternative and the other objects in the Hot List. Measures (a) and (b) are obtained from the record of the user interaction with the system, whereas measures in (c) are obtained from user self-reports. A closer analysis of the above measures indicates that the measures in (c) are simply a more precise version of measures (a) and (b). In fact, not only do they assess the same information as measures (a) and (b), namely a preference ranking among the new alternative and the objects in the Hot List, but they also offer two additional critical advantages:
(i) Self-reports allow a subject to express differences in satisfaction more precisely than by ranking. For instance, in the self-report shown in Figure 8, the subject was able to specify that the first house in the Hot List was only one space (unit of satisfaction) better than the house preceding it in the ranking, while the third house was two spaces better than the house preceding it.
(ii) Self-reports do not force subjects to express a total order between the houses. For instance, in Figure 8 the subject was allowed to express that the second and the third house in the Hot List were equally good for her.
[Footnote 3: If the subject does not adopt the new house, she is asked to express her satisfaction with the new house in an additional self-report.]
Furthermore, measures of satisfaction obtained through self-reports can be combined in a single, statistically sound measure that concisely expresses how much the subject liked the new house with respect to the other houses in the Hot List. This measure is the z-score of the subject's self-reported satisfaction with the new house, with respect to the self-reported satisfaction with the houses in the Hot List. A z-score is a normalized distance in standard deviation units of a measure xi from the mean of a population X. Formally, for xi ∈ X:

z-score(xi, X) = [xi - µ(X)] / σ(X)

For instance, the satisfaction z-score for the new instance, given the sample self-reports shown in Figure 8, would be:

[7 - µ({8,7,7,5})] / σ({8,7,7,5}) = 0.2

The satisfaction z-score precisely and concisely integrates all the measures of behavioral intentions and attitude change. We have used satisfaction z-scores as our primary measure of argument effectiveness.

5.2 Results
As shown in Figure 9, the satisfaction z-scores obtained in the experiment confirmed our hypotheses. Arguments generated for the Tailored-Concise condition were significantly more effective than arguments generated for the Tailored-Verbose condition. The Tailored-Concise condition was also significantly better than the No-Argument condition, but to a lesser extent. Logs of the interactions suggest that this happened because subjects in the No-Argument condition spent significantly more time further exploring the dataset. Finally, there was no significant difference in argument effectiveness between the No-Argument and Tailored-Verbose conditions.

[Figure 8 (filled-out sample): satisfaction marks of 8 for the 1st house, 7 for the 2nd house (the new house), 7 for the 3rd house, and 5 for the 4th house.]
[Figure 9: Results for satisfaction z-scores.]
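The z-score computation of Section 5.1 can be reproduced directly; the sketch below uses the population standard deviation, which matches the worked example (the ratings 8, 7, 7, 5 and the new house's rating of 7 are taken from the sample self-report):

```python
from statistics import mean, pstdev

def z_score(x, population):
    """Normalized distance of x from the mean of `population`,
    in population standard-deviation units."""
    return (x - mean(population)) / pstdev(population)

# Satisfaction ratings from the sample self-report: the four houses in
# the Hot List scored 8, 7, 7 and 5; the new house is the one rated 7.
hot_list = [8, 7, 7, 5]
print(round(z_score(7, hot_list), 1))  # -> 0.2, as in the worked example
```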
(In Figure 9, the average z-scores for the three conditions are shown in the grey boxes and the p-values are reported beside the links between conditions.) With respect to the other measures of argument effectiveness mentioned in Section 3.1, we have not found any significant differences among the experimental conditions.

6 Conclusions and Future Work
Argumentation theory indicates that effective arguments should be concise, presenting only pertinent and cogent information. However, argumentation theory does not tell us what the most effective degree of conciseness is. As a preliminary attempt to answer this question for evaluative arguments, we have compared in a formal experiment the effectiveness of arguments generated by our argument generator at two different levels of conciseness. The experiment results show that arguments generated at the more concise level are significantly better than arguments generated at the more verbose level. However, further experiments are needed to determine what the optimal level of conciseness is.

Acknowledgements
Our thanks go to the members of the Autobrief project: S. Roth, N. Green, S. Kerpedjiev and J. Mattis. We also thank C. Conati for comments on drafts of this paper. This work was supported by grant number DAA-1593K0005 from the Advanced Research Projects Agency (ARPA).

References
Cacioppo, J. T., R. E. Petty, et al. (1983). "Effects of Need for Cognition on Message Evaluation, Recall, and Persuasion." Journal of Personality and Social Psychology 45(4): 805-818.
Carenini, G. and J. Moore (2000). A Strategy for Generating Evaluative Arguments. International Conference on Natural Language Generation, Mitzpe Ramon, Israel.
Clemen, R. T. (1996). Making Hard Decisions: An Introduction to Decision Analysis. Belmont, California, Duxbury Press.
Dale, R., B. di Eugenio, et al. (1998). "Introduction to the Special Issue on Natural Language Generation." Computational Linguistics 24(3): 345-353.
Edwards, W.
and F. H. Barron (1994). "SMARTS and SMARTER: Improved Simple Methods for Multi-attribute Utility Measurements." Organizational Behavior and Human Decision Processes 60: 306-325.
Elhadad, M. (1995). "Using argumentation in text generation." Journal of Pragmatics 24: 189-220.
Infante, D. A. and A. S. Rancer (1982). "A Conceptualization and Measure of Argumentativeness." Journal of Personality Assessment 46: 72-80.
Klein, D. (1994). Decision Analytic Intelligent Systems: Automated Explanation and Knowledge Acquisition. Lawrence Erlbaum Associates.
Lester, J. C. and B. W. Porter (1997). "Developing and Empirically Evaluating Robust Explanation Generators: The KNIGHT Experiments." Computational Linguistics 23(1): 65-101.
Mayberry, K. J. and R. E. Golden (1996). For Argument's Sake: A Guide to Writing Effective Arguments. Harper Collins, College Publisher.
McConachy, R., K. B. Korb, et al. (1998). Deciding What Not to Say: An Attentional-Probabilistic Approach to Argument Presentation. Cognitive Science Conference.
Olson, J. M. and M. P. Zanna (1991). Attitudes and beliefs; Attitude change and attitude-behavior consistency. Social Psychology. R. M. Baron and W. G. Graziano.
Robin, J. and K. McKeown (1996). "Empirically Designing and Evaluating a New Revision-Based Model for Summary Generation." Artificial Intelligence 85: 135-179.
Roth, S. F., M. C. Chuah, et al. (1997). Towards an Information Visualization Workspace: Combining Multiple Means of Expression. Human-Computer Interaction Journal.
Young, M. R. "Using Grice's Maxim of Quantity to Select the Content of Plan Descriptions." Artificial Intelligence Journal, to appear.

[Figure 9 values: average satisfaction z-scores of 0.88 (Tailored-Concise), 0.05 (Tailored-Verbose) and 0.25 (No-Argument); p-values 0.02, 0.03 and 0.31.]
Multi-Agent Explanation Strategies in Real-Time Domains

Kumiko Tanaka-Ishii, University of Tokyo, 7-3-1 Hongo Bunkyo-ku Tokyo 113-8656 Japan, [email protected]
Ian Frank, Electrotechnical Laboratory, 1-1-4 Umezono, Tsukuba Ibaraki 305-0085 Japan, [email protected]

Abstract
We examine the benefits of using multiple agents to produce explanations. In particular, we identify the ability to construct prior plans as a key issue constraining the effectiveness of a single-agent approach. We describe an implemented system that uses multiple agents to tackle a problem for which prior planning is particularly impractical: real-time soccer commentary. Our commentary system demonstrates a number of the advantages of decomposing an explanation task among several agents. Most notably, it shows how individual agents can benefit from following different discourse strategies. Further, it illustrates that discourse issues such as controlling interruption, abbreviation, and maintaining consistency can also be decomposed: rather than considering them at the single level of one linear explanation they can also be tackled separately within each individual agent. We evaluate our system's output, and show that it closely compares to the speaking patterns of a human commentary team.

1 Introduction
This paper deals with the issue of high-level vs low-level explanation strategies. How should an explanation find a balance between describing the overall, high-level properties of the discourse subject, and the low-level, procedural details? In particular, we look at the difficulties presented by domains that change in real-time. For such domains, the balance between reacting to the domain events as they occur and maintaining the overall, high-level consistency is critical. We argue that it is beneficial to decompose the overall explanation task so that it is carried out by more than one agent.
This allows a single agent to deal with the tracking of the low-level developments in the domain, leaving the others to concentrate on the high-level picture. The task of each individual agent is simplified, since they only have to maintain consistency for a single discourse strategy. Further, discourse issues such as controlling interruption, abbreviation, and maintaining consistency can also be decomposed: rather than considering them at the single level of one linear explanation they can be tackled separately within each individual agent and then also at the level of inter-agent cooperation. We look at real-world examples of explanation tasks that are carried out by multiple agents, and also give a more detailed protocol analysis of one of these examples: World Cup soccer commentary by TV announcers. We then describe an actual implementation of an explanation system that produces multi-agent commentary in real-time for a game of simulated soccer. In this system, each of the agents selects their discourse content on the basis of importance scores attached to events in the domain. The interaction between the agents is controlled to maximise the importance score of the uttered comments. Although our work focuses on real-time domains such as soccer, our discussion in Section 2 puts our contribution in a wider context and identifies a number of the general benefits of using multiple agents for explanation tasks. We chose the game of soccer for our research primarily because it is a multi-agent game in which various events happen simultaneously on the field. Thus, it is an excellent domain to study real-time content selection among many heterogeneous facts.
A second reason for choosing soccer is that detailed, high-quality logs of simulated soccer games are available on a real-time basis from Soccer Server, the official soccer simulation system for the 'RoboCup' Robotic Soccer World Cup initiative (Kitano et al., 1997).

[Figure 1: Common explanation tasks categorised according to the ease of planning them in advance (difficult, possible, easy): sports commentary, mind games commentary, car navigation systems, panel discussion, live lecture, lecture (TV), business presentation.]

2 Explanation Strategies
In this paper, we use the term explanation in its broadest possible sense, covering the entire spectrum from planned lectures to commentating on sports events. Any such explanation task is affected by many considerations, including the level of knowledge assumed of the listeners and the available explanation time. However, the issue we mainly concentrate on here has not previously received significant attention: the benefits of splitting an explanation task between multiple agents.

2.1 Explanations and Multi-Agency
The general task of producing explanations with multiple agents has not been studied in depth in the literature. Even for the 'naturally' multi-agent task of soccer commentary, the systems described in the recent AI Magazine special issue on RoboCup (André et al., 2000) are all single-agent. However, one general issue that has been studied at the level of single agents is the trade-off between low-level and high-level explanations. For example, in tutoring systems, Cawsey (1991) has described a system that handles real-time interactions with a user by separating the control of the content planning and dialogue planning. We believe that the key issue constraining the use of high-level and low-level explanations in a discourse is the ability to construct prior plans.
For example, researchers in the field of discourse analysis (e.g., Sinclair and Coulthard (1975)) have found that relatively formal types of dialogues follow a regular hierarchical structure. When it is possible to find these kinds of a priori plans for a discourse to follow, approaches such as those cited above for tutoring are very effective. However, if prior plans are hard to specify, a single agent may simply find it becomes overloaded. Typically there will be two conflicting goals: deal with and explain each individual (unplanned) domain event as it occurs, or build up and explain a more abstract picture that conveys the overall nature of the explanation topic. Thus, for any changing domain in which it is hard to plan the overall discourse, it can be beneficial to divide the explanation task between multiple agents. Especially for real-time domains, the primary benefit of decomposing the explanation task in this way is that it allows each agent to use a different discourse strategy to explain different aspects of the domain (typically, high-level or low-level). However, we can see from Figure 1 that even some activities that are highly planned are sometimes carried out by multiple agents. For example, business presentations are often carried out by a team of people, each of which is an expert in some particular area. Clearly, there are other benefits that come from decomposing the explanation task between more than one agent. We can give a partial list of these here:
- Agents may start with different abilities. For example, in a panel session, one panellist may be an expert on Etruscan vases, while another may be an expert on Byzantine art.
- It can take time to observe high-level patterns in a domain, and to explain them coherently.
Having a dedicated agent for commenting on the low-level changes increases the chance that higher-level agents have a chance to carry out analysis.
- A team of agents can converse together. In particular, they can make explanations to each other instead of directly explaining things to the listeners. This can be a more comfortable psychological position for the listener to accept new information.
- The simple label of "expert" adds weight to the words of a speaker, as shown convincingly by the research of (Reeves and Nass, 1996). The use of multiple agents actually gives a chance to describe individual agents as "experts" on specific topics.
- Even a single agent speaking in isolation could describe itself as an expert on various topics. However, (Reeves and Nass, 1996) also show that self-praise has far less impact than the same praise from another source. Rather than describing themselves as experts, a multi-agent framework allows agents to describe the other agents as experts.
To illustrate the different roles that can be taken by multiple agents in an explanation task, we carried out a simple protocol analysis of an example from the far left of our scale of Figure 1: soccer commentary.

2.2 Soccer Protocol Analysis
We analysed the video of the NHK coverage of the first half of the 1998 World Cup final. This commentary was carried out by a team of two people who we call the 'announcer' and the 'expert'. The figures in Table 1 demonstrate that there are clear differences between the roles assumed by this commentary team. Although both use some background knowledge to fill out their portions of the commentary, the announcer mostly comments on low-level events, whilst the expert mostly gives higher-level, state-based information. Further, we can see that the announcer asked questions of the expert with a high frequency.
Overall, there is a clear indication that one agent follows the low-level events and that the other follows the high-level nature of the game. Accordingly, their discourse strategies are also different: the announcer tends to speak in shorter phrases, whereas the expert produces longer analyses of any given subject. The commentary team collaborates so that the consistency between high-level, low-level, and background comments is balanced within the content spoken by each individual, and also within the overall commentary.

2.3 A First Implementation
As a first step towards a multi-agent explanation system based on the above observations, the following sections describe how we implemented a commentary system for a game of simulated soccer. Our experience with this system reflected the discussion above in that we found it was very difficult to consistently manage all the possible discourse topics within a single-agent framework. When changing to a multi-agent system, however, we found that a small number of simple rules for inter-agent interaction produced a far more manageable system. We also found that the system was behaviourally very similar to the protocol of Table 1.

3 An Architecture For Multi-Agent Soccer Commentary
Figure 2 shows the basic architecture of our soccer commentator system. As we mentioned in the Introduction, this system is designed to produce live commentary for games played on RoboCup's Soccer Server. Since the Soccer Server was originally designed as a testbed for multi-agent systems (Noda et al., 1998), we call our commentator Mike ("Multi-agent Interactions Knowledgeably Explained"). Typically, Mike is used to add atmosphere to games played in the RoboCup tournaments, so we assume that the people listening to Mike can also see the game being described.
[Figure 2: Mike, a multi-agent commentator. Architecture diagram with components: Soccer Server, shared memory, Voronoi/Statistics/Basic analysers, the Analyser and Announcer agents, a Communicator, TTS, and the output commentary.]

The Soccer Server provides a real-time game log of a very high quality, sending information on the positions of the players and the ball to a monitoring program every 100msec. Specifically, this information consists of 1) player location and orientation, 2) ball location, and 3) game score and play modes (throw ins, goal kicks, etc). This information is placed in Mike's shared memory, where it is processed by a number of 'Soccer Analyser' modules that analyse higher-level features of a game. These features include statistics on player positions, and also 'bigrams' of ball play chains represented as first order Markov chains. The Voronoi analyser uses Voronoi diagrams to assess game features such as defensive areas. Note that we do not consider the Soccer Analysers to be 'agents'; they are simply processes that manipulate the information in the shared memory. The only true 'agents' in the system are the Announcer and the Analyser, which communicate both with each other and with the audience. All information in Mike's shared memory is represented in the form of commentary fragments that we call propositions. Each proposition consists of a tag and some attributes.
For example, a pass from player No.5 to No.11 is represented as (Pass 5 11), where Pass is the tag, and the numbers 5 and 11 are the attributes.

Table 1: Protocol analysis of announcer and expert utterances in professional TV coverage of soccer
Commentary Feature                                          | Announcer | Expert | Note
Background comment (e.g., on stadium, or team backgrounds)  | 7%        | 20%    | (predefined plan)
Event-based comment                                         | 82%       | 3%     | (low-level)
State-based comment                                         | 11%       | 77%    | (high-level)
Average length of comment                                   | 1.3sec    | 3.8sec | (consistency)
Asks a question to the other                                | 30        | 0      | (new explanation mode)
Interrupts the other                                        | 5         | 0      | (priority of roles)
Announcer describes expert as expert                        | 0         | n/a    | (adds weight to expert)

Table 2: Examples of Mike's proposition tags
      | Local                                                            | Global
Event | Kick, Pass, Dribble, ShootPredict                                | ChangeForm, SideChange
State | Mark, PlayerPassSuccessRate, ProblematicPlayer, PlayerActiveTime | TeamPassSuccessRate, AveragePassDistance, Score

Mike uses around 80 different tags categorised in two ways: as being local or global and as being state-based or event-based. Table 2 shows some examples of categorised proposition tags. The operation of the Announcer and the Analyser agents is described in detail in the following section. Basically, they select propositions from the shared memory (based on their 'importance scores') and process them with inference rules to produce higher-level chains of explanations. The discourse control techniques of interruption, repetition, abbreviation, and silence are used to control both the dialogue strategies of each individual agent and also the interaction between them. To produce variety in the commentary, each possible proposition is associated with several possible commentary templates (output can be in English or Japanese). Figure 3 shows the overall repertoire of Mike's comments.
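A minimal sketch of this tag-plus-attributes representation, together with an illustrative importance score built from the three factors named in Section 4.1 (elapsed time, distance from the ball for event-based propositions, and how often the proposition has already been uttered). The combination and the weights below are invented for illustration; the paper does not give Mike's actual formula:

```python
import math
import time
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class Proposition:
    """A commentary fragment: a tag plus attributes, e.g. (Pass 5 11)."""
    tag: str
    attributes: Tuple
    base_score: float = 10.0
    location: Optional[Tuple[float, float]] = None  # set for event-based tags
    created: float = field(default_factory=time.time)
    times_uttered: int = 0

def importance(p: Proposition, now: float, ball: Tuple[float, float],
               w_age: float = 1.0, w_dist: float = 0.1,
               w_rep: float = 2.0) -> float:
    """Illustrative importance score: decays with age and repetition,
    and (for event-based propositions) with distance from the ball."""
    score = p.base_score - w_age * (now - p.created) - w_rep * p.times_uttered
    if p.location is not None:
        score -= w_dist * math.hypot(p.location[0] - ball[0],
                                     p.location[1] - ball[1])
    return score

pass_prop = Proposition("Pass", (5, 11), location=(30.0, 10.0), created=0.0)
print(importance(pass_prop, now=2.0, ball=(30.0, 10.0)))  # -> 8.0
```

Recomputing such a score at every game step, as Mike does every 100msec, makes old or already-stated propositions steadily less attractive to utter.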
The actual spoken commentary is realised with off-the-shelf text-to-speech software (Fujitsu's Japanese Synthesiser for Japanese, and DecTalk for English).

Figure 3: Mike's repertoire of statements
- Explanation of complex events. Formation and position changes, advanced plays.
- Evaluation of team plays. Average formations, formations at a certain moment, players' locations, indication of active or problematic players, winning passwork patterns, wasteful movements.
- Suggestions for improving play. Loose defence areas, better locations for inactive players.
- Predictions. Passes, game results, shots.
- Set pieces. Goal kicks, throw ins, kick offs, corner kicks, free kicks.
- Passwork. Tracking of basic passing play.

4 Multi-Agent NL Generation
In this section, we describe how Mike uses importance scores, real-time inferencing, and discourse control strategies to implement, and to control the interaction between, agents with differing explanation strategies. To form a single coherent commentary with multiple agents we extended the single-agent framework of (Tanaka-Ishii et al., 1998). The basic principle of this framework is that given a set of scores that capture the information transmitted by making any utterance, the most effective dialogue is the one that maximises the total score of all the propositions that are verbalised. We therefore created two agents with different strategies for content scheduling. One agent acts as an announcer, following the low-level events on the field. This agent's strategy is biased to allow frequent topic change and although it uses inference rules to look for connections between propositions in the shared memory, it only uses short chains of inference. On the other hand, the second agent acts as an 'expert analyst', and is predominantly state based.
The expert's strategy is biased to have more consistency, and to apply longer chains of inference rules than the announcer.

4.1 Importance Scores
In Mike, importance scores are designed to capture the amount of information that any given proposition will transmit to an audience. They are not fixed values, but are computed from scratch at every game step (100msec). The importance score of each proposition depends on three factors: 1) the elapsed time since the proposition was generated, 2) for event-based propositions, a comparison of the place associated with the proposition and the current location of the ball, and 3) the frequency that the proposition has already been stated. To keep the number of comments in the shared memory to a manageable number they are simply limited in number, with the oldest entries being removed as new propositions are added.

4.2 Real Time Inference
Mike's commentary propositions are the results of large amounts of real-time data processing, but are typically low-level. A commentary based solely on these propositions would be rather detailed and disconnected. Thus, to analyse the play more deeply, Mike gives the commentary agents access to a set of forward-chaining rules that describe the possible relationships between the propositions. In total, there are 145 of these rules, divided into the two classes of logical consequences and second order relations. We give a representative example from each class here:
- Logical consequence:
  (PassSuccessRate player percentage), (PassPattern player Goal) → (active player)
- Second order relation:
  (PassSuccessRate player percentage), (PlayerOnVoronoiLine player) → (Reason @1 @2)
The basic premise of the announcer's dialogue strategy is to follow the play by repeatedly choosing the proposition with the highest importance score.
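The two rule classes can be mimicked with a toy forward chainer. Mike's 145 rules and their matching machinery are not given in the paper, so the sketch below simplifies premise matching to tag presence, and the rule bodies are loose reconstructions of the two examples above:

```python
# Rules pair a tuple of premise tags with a function that builds the
# derived proposition from the matched facts (a hypothetical encoding).
RULES = [
    # logical consequence: high pass rate + passes toward goal => active player
    (("PassSuccessRate", "PassPattern"),
     lambda rate, pattern: ("active", rate[1])),
    # second order relation: link two facts as reason/consequence (@1, @2)
    (("PassSuccessRate", "PlayerOnVoronoiLine"),
     lambda rate, voronoi: ("Reason", rate, voronoi)),
]

def forward_chain(facts):
    """Fire every rule whose premise tags are all present among the
    facts; return the newly derived propositions (one pass only)."""
    by_tag = {fact[0]: fact for fact in facts}
    derived = []
    for premises, conclude in RULES:
        if all(tag in by_tag for tag in premises):
            derived.append(conclude(*(by_tag[tag] for tag in premises)))
    return derived

facts = [("PassSuccessRate", 11, 80), ("PassPattern", 11, "Goal")]
print(forward_chain(facts))  # [('active', 11)]
```

Chaining such rules repeatedly is what lets the expert agent build the longer, higher-level analyses described next.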
Before stating this proposition, however, the announcer checks any applicable inference rules in a top down manner, in an attempt to produce higher-level commentary fragments and background related information. In contrast to this, the expert agent has a library of themes (e.g., pass statistics, formation, stamina) between which it chooses based on the propositions selected by the announcer so far. It then uses inference rules to try to construct a series of high-level inferences related to the theme. The expert applies rules until it succeeds in constructing a single coherent piece of structured commentary. When it is the agent's turn to speak it can then send this commentary to the TTS software.

4.3 Discourse Control Strategies
Consider a passage of commentary where the announcer is speaking and a proposition with a much larger importance score than the one being uttered appears in the shared memory. If this occurs, the total importance score may become larger if the announcer immediately interrupts the current utterance and switches to the new one. As an example, the left of Figure 4 shows (solid line) the change of the importance score with time when an interruption takes place (the dotted line represents the importance score without interruption). The left part of the solid line is lower than the dotted, because we assume that the first utterance conveys less of its importance score when it is not completely uttered. However, the right part of the solid line is higher than the dotted line, because the importance of the second utterance will be lower by the time it is uttered without interrupting the commentary. Note that after selecting a proposition to be uttered, its importance score is assumed to decrease with time (as indicated in the figure, the decrease is computed dynamically and will be different for each proposition, and often not even linear).
The decision of whether or not to interrupt is based on a comparison of the area between the solid or dotted lines and the horizontal axis. Similarly, it may happen that when the two most important propositions in shared memory are of similar importance, the amount of communicated information can best be maximised by quickly uttering the most important proposition and then moving on to the second before it loses importance due to some development of the game situation. This is illustrated in the second graph of Figure 4. Here, the left hand side of the solid line is lower than that of the dotted because an abbreviated utterance (which might not be grammatically correct, or whose context might not be fully given) transmits less information than a more complete utterance. But since the second proposition can be uttered before losing its importance score, the right hand part of the solid line is higher than that of the dotted. As before, the benefits or otherwise of this modification should be decided by comparing the two areas made by the solid and the dotted line with the horizontal axis. We originally designed these techniques just to improve the ability of the announcer agent to follow the play.

[Figure 4: Change of importance score on interruption and abbreviation. Two graphs of importance score vs time, comparing the score conveyed with and without interruption, and with and without abbreviation (an abbreviated utterance is less comprehensible, but lets other important content be uttered sooner).]
[Figure 5: Increase in importance scores caused by emphasis. A repeated utterance has a higher score by emphasis.]
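The area comparison behind the interruption decision can be illustrated with a toy numeric version. The linear decay and the fixed penalty for cutting an utterance short are both invented for this sketch; Mike computes the decay dynamically and per proposition:

```python
def conveyed(score, decay, start, fraction=1.0):
    """Importance conveyed by an utterance begun at time `start`: the
    score decays linearly while it waits, and an interrupted utterance
    only conveys a fraction of its score."""
    return max(score - decay * start, 0.0) * fraction

def should_interrupt(cur, new, now, remaining):
    """Toy area comparison: cur/new are (score, decay) pairs and
    `remaining` is the time left in the current utterance."""
    # Option 1: finish the current utterance, then speak the new one,
    # which by then has lost some of its importance.
    wait = conveyed(*cur, start=now) + conveyed(*new, start=now + remaining)
    # Option 2: interrupt now; the current utterance conveys only half
    # its score, but the new proposition is uttered while still fresh.
    cut = conveyed(*cur, start=now, fraction=0.5) + conveyed(*new, start=now)
    return cut > wait

# A fast-decaying, high-importance event justifies interrupting:
print(should_interrupt((2, 0.1), (10, 1.0), now=0.0, remaining=3.0))  # True
```

With a second proposition no more urgent than the first (e.g. `new=(2, 0.1)`), the same comparison comes out against interrupting, which is exactly the trade-off the figure depicts.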
With two commentary agents, however, both interruption and abbreviation can be adapted to control inter-agent switching. In Mike, the default operation is for the announcer to keep talking while the ball is in the final third of the field, or while there are important propositions to utter. When the announcer has nothing to say, the expert agent can speak or both agents can remain silent. If the expert agent chooses to speak, it may happen that an important event on the field makes the announcer want to speak again. We model both interruption and abbreviation as multi-agent versions of the graphs of Figure 4: the agent speaking the first utterance is the expert and the agent speaking the second is the announcer. We use two further discourse control techniques in Mike: repetition and silence. Repetition is depicted in Figure 5. Sometimes it can happen that the remaining un-uttered propositions in the shared memory have much smaller scores than any of those that have already been selected. In this case, we allow individual agents to repeat things that they have previously said. Also, we allow them to repeat things that the other agent has said, to increase the effectiveness of the dialogue between them. Finally, we also model silence by adding bonuses to the importance scores of the propositions uttered by the commentators. Specifically, we add a bonus to the scores of propositions uttered directly before a period where both commentators are silent (the longer that a commentary continues uninterrupted, the higher the silence bonus). This models the benefit of giving listeners time to actually digest the commentary. Also, a period of silence contributes a bonus to the importance scores of the immediately following propositions. This models the increased emphasis of pausing before an important statement.
4.3.1 Communication Templates

To improve the smoothness of the transfer of the commentary between the two agents we devised a small number of simple communication templates. The phrases contained in these templates are spoken by the agents whenever Mike realises that the commentary is switching between them. For the purposes of keeping the two agents distinct, the expert agent is referred to by the announcer as E-Mike ("Expert Mike"). To pass the commentary to the Expert, the Announcer can use a number of phrases such as "E-Mike, over to you", "Any impressions, E-Mike?", or just "E-Mike?". The announcer can also pass over control by simply stopping speaking. If the commentary switches from Announcer to Expert with a question, the Expert will start with "Yes..." or "Well...". The communication templates for passing the commentary in the other direction (Expert to Announcer) are shown in Table 3. To help listeners distinguish the dialogue between the Announcer and Expert better, we also use a female voice for one agent and a male voice for the other.

Announcer interrupts expert:          "Sorry, E-MIKE."  "Have to stop you there E-MIKE."  "Oh!... But look at this!"
Announcer speaks when expert stops:   "Thanks."  "That's very true."  "Thanks E-MIKE."  "Maybe that will change as the game goes on."  "OK..."

Table 3: Phrases used by the announcer when interrupting the expert, or when speaking after the expert agent has simply stopped (no interruption)

5 Evaluation

Mike is robust enough for us to have used it to produce live commentary at RoboCup events, and to be distributed on the Internet (it has been downloaded by groups in Australia and Hungary and used for public demonstrations). A short example of Mike's output is shown in Figure 6. To evaluate Mike more rigorously we carried out two questionnaire-based evaluations, and also a log comparison with the data produced from the real-world soccer commentary in Section 2. For the first of the questionnaire evaluations, we used as subjects twenty of the attendees of a recent RoboCup Spring camp. All these subjects were familiar with the RoboCup domain and the Soccer Server environment. We showed them an entire half of a RoboCup game commentated by Mike and collated their responses to the questions shown in Table 4. These results largely show that the listeners found the commentary to be useful and to contain enough information to maintain their attention. We also included some open-ended questions on the questionnaire to elicit suggestions for features that should be strengthened or incorporated in future versions of Mike.

Question                                         Scale                      Result
Is the game better with or without commentary?   (5=with, 1=without)        4.97
Was the commentary easy to understand?           (5=easy, 1=hard)           3.44
Were the commentary contents accurate?           (5=correct, 1=incorrect)   3.25
Was the commentary informative?                  (5=yes, 1=no)              3.53
Did you get tired of the commentary?             (5=no, 1=quickly)          3.97

Table 4: Average responses of 20 subjects to the first questionnaire evaluation of (two-agent) Mike

Question                                   Scale                      1-agent   2-agent   Diff
Is the game better with or without...?     (5=with, 1=without)        4.45      4.45      0%
Was the commentary easy to understand?     (5=easy, 1=hard)           2.95      3.25      +10%
Were the commentary contents accurate?     (5=correct, 1=incorrect)   2.65      2.95      +11%
Was the commentary informative?            (5=yes, 1=no)              3.15      3.35      +6%
Did you get tired of the commentary?       (5=no, 1=quickly)          2.35      3.35      +43%

Table 5: Difference in response of ten subjects when viewing the 1-agent and 2-agent versions of Mike
The most frequent responses here were requests for more background information on previous games played by the teams (possible in RoboCup, but to date we have only done this thoroughly for individual games), more conversation between the agents (we plan to improve this with more communication templates), and more emotion in the voices of the commentators (we have not yet tackled such surface-level NLG issues). We also asked what the ideal number of commentators for a game would be; almost all subjects replied 2, with just two replying 3 and one replying 1. The above results are encouraging for Mike, but to show that the use of multiple agents was actually one of the reasons for the favourable audience impression, we carried out a further test.

Announcer: yellow 9, in the middle of the field, yellow team (a set play happened here). Any impressions, E-Mike?
Analyser: Well, here are statistics concerning possessions, left team has slightly smaller value of possession, it is 43 percent versus 56. right team has rather higher value of territorial advantage, Overall, right team is ahead there. (Score is currently 0-0. E-Mike judges that red team is doing better.)
Announcer: Really. dribble, yellow 3, on the left, great long pass made to yellow 1, for red 6, red 2's pass success rate is 100 percent. E-Mike?
Analyser: Looking at the dribbles and steals, red team was a little less successful in dribbling, red team has a lower value of dribble average length, left is 21 meters whereas right is 11, right team has a few less players making zero passes, yellow team has made slightly less stealing,
Announcer: wow (interruption because red 11 made a shot), red 11, goal, red 11, Goal! It was red 10, And a pass for red 11! The score is 0-1!

Figure 6: Example of Mike's commentary from the RoboCup'98 final
We created a single-agent version of Mike by switching off the male/female voices in the TTS software and disabling the communication templates. This single-agent commentator comments on almost exactly the same game content as the multi-agent version, but with a single voice. We recruited ten volunteers with no prior knowledge of RoboCup and showed them both the single-agent and multi-agent versions of Mike commentating the same game as used in the previous experiment. We split the subjects into two groups so that one group watched the multi-agent version first, and the other watched the single-agent version first. Table 5 shows that the average questionnaire responses over the two groups were lower than with the subjects who were familiar with RoboCup, but that the multi-agent version was more highly evaluated than the single-agent version. Thus, even the superficially small modification of removing the agent dialogue has a measurable effect on the commentary. Finally, we analysed Mike's commentary using the same criteria as our protocol analysis of human soccer commentary in Section 2.2. We selected ten half-games at random from the 1998 RoboCup and compiled statistics on Mike's output with an automatic script. The results of this analysis (Table 6) show a marked similarity to those of the human commentators. This initial result is a very encouraging sign for further work in this area.

Commentary feature                      Announcer   Expert    Note
Background comment                      16%         22%       (predefined plan)
Event-based comment                     64%         0%        (low-level)
State-based comment                     20%         78%       (high-level)
Average length of comment               1.1 sec     2.9 sec   (consistency)
Asks a question to the other            12.2        0         (new explanation mode)
Interrupts the other                    8.6         0         (priority of roles)
Announcer describes expert as expert    0           n/a       (adds weight to expert)

Table 6: Breakdown of Mike's agent utterances over ten randomly selected RoboCup half-games
6 Conclusions

We have argued for the superiority of producing explanations with multiple, rather than single, agents. In particular, we identified the difficulty of producing prior plans as the key issue constraining the ability of a single agent to switch between high-level and low-level discourse strategies. As a first step towards a multi-agent explanation system with solid theoretical underpinnings, we described the explanation strategies used by our live soccer commentary system, Mike. We showed how a set of importance scores and inference rules can be used as the basis for agents with different discourse strategies, and how the discourse control techniques of interruption, abbreviation, repetition and silence can be used not just to moderate the discourse of an individual agent, but also the interaction between agents. We evaluated Mike's output through listener surveys, showing that it represents an advance over existing commentary programs, which are all single-agent. We also found that the discourse strategies of Mike's agents closely resembled those revealed by the protocol analysis of a team of real-life soccer commentators.
A Computational Approach to Zero-pronouns in Spanish

Antonio Ferrández and Jesús Peral
Dept. Languages and Information Systems, University of Alicante
Carretera San Vicente S/N, 03080 Alicante, Spain
{antonio, jperal}@dlsi.ua.es

Abstract

In this paper, a computational approach for resolving zero-pronouns in Spanish texts is proposed. Our approach has been evaluated with partial parsing of the text, and the results obtained show that these pronouns can be resolved using techniques similar to those used for pronominal anaphora. Compared to other well-known baselines on pronominal anaphora resolution, our approach has consistently obtained better results.

Introduction

In this paper, we focus on the resolution, from a computational point of view, of a linguistic problem in Spanish texts: zero-pronouns in the "subject" grammatical position. The aim of this paper is not to present a new theory regarding zero-pronouns, but to show that other algorithms, which have previously been applied to the computational resolution of other kinds of pronoun, can also be applied to resolve zero-pronouns. The resolution of these pronouns is implemented in the computational system called Slot Unification Parser for Anaphora resolution (SUPAR). This system, which was presented in Ferrández et al. (1999), resolves anaphora in both English and Spanish texts. It is a modular system and is currently being used for Machine Translation and Question Answering, applications in which resolving this kind of pronoun is very important due to its high frequency in Spanish texts, as this paper will show. We focus on zero-pronouns in Spanish texts, although they also appear in other languages, such as Japanese, Italian and Chinese. In English texts, this sort of pronoun occurs far less frequently, as the use of subject pronouns is generally compulsory in the language.
While in other languages zero-pronouns may appear in either the subject's or the object's grammatical position (e.g. Japanese), in Spanish texts zero-pronouns only appear in the position of the subject. In the following section, we present a summary of the present state of the art in zero-pronoun resolution. This is followed by a description of the process for the detection and resolution of zero-pronouns. Finally, we present the results we have obtained with our approach.

1 Background

Zero-pronouns have already been studied in other languages, such as Japanese (e.g. Nakaiwa and Shirai (1996)). They have not yet been studied in Spanish texts, however. Among the work done on their resolution in different languages, there are nevertheless several points that carry over to Spanish. The first is that the pronouns must first be located in the text, and then resolved. Another common point is that all these works employ different kinds of knowledge (e.g. morphologic or syntactic) for their resolution. Some of them are based on Centering Theory (e.g. Okumura and Tamura (1996)). Others, however, distinguish between restrictions and preferences (e.g. Lappin and Leass (1994)). Restrictions tend to be absolute and, therefore, discard any possible antecedents, whereas preferences tend to be relative and require the use of additional criteria, i.e. heuristics that are not always satisfied by all anaphors. Our anaphora resolution approach belongs to this second group. In computational processing, semantic and domain information is expensive to use compared to other kinds of knowledge. Consequently, current anaphora resolution methods rely mainly on restrictions and preference heuristics, which employ information originating from morpho-syntactic or shallow semantic analysis (see Mitkov (1998) for example). Such approaches, nevertheless, perform notably well.
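The restriction/preference division just described can be rendered as a small sketch. This is our own illustration, not SUPAR's code: restrictions filter absolutely, while preferences are tried in order and never discard every remaining candidate. The candidate fields, the sample restriction (number agreement) and the sample preference (recency) are invented stand-ins for the knowledge sources the paper actually uses.

```python
# Illustrative restrictions-then-preferences resolver (our own sketch).

def resolve(candidates, restrictions, preferences):
    # Restrictions are absolute: candidates failing any are discarded.
    survivors = [c for c in candidates if all(r(c) for r in restrictions)]
    # Preferences are relative: applied in order of importance, stopping
    # as soon as a single candidate remains.
    for prefer in preferences:
        if len(survivors) <= 1:
            break
        preferred = [c for c in survivors if prefer(c)]
        if preferred:              # never let a preference empty the list
            survivors = preferred
    return survivors[0] if survivors else None

# Invented toy data:
candidates = [{"head": "Ana", "number": "sg", "sentence": 1},
              {"head": "Pedro", "number": "sg", "sentence": 2},
              {"head": "los niños", "number": "pl", "sentence": 2}]
restrictions = [lambda c: c["number"] == "sg"]   # e.g. number agreement
preferences = [lambda c: c["sentence"] == 2]     # e.g. same sentence as anaphor
print(resolve(candidates, restrictions, preferences)["head"])
```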
Lappin and Leass (1994) describe an algorithm for pronominal anaphora resolution that achieves a high rate of correct analyses (85%). Their approach, however, operates almost exclusively on syntactic information. More recently, Kennedy and Boguraev (1996) proposed an algorithm for anaphora resolution that is actually a modified and extended version of the one developed by Lappin and Leass (1994). It works from the output of a POS tagger and achieves an accuracy rate of 75%.

2 Detecting zero-pronouns

In order to detect zero-pronouns, the sentences must first be divided into clauses, since the subject can only appear among the constituents of its clause. After that, a noun phrase (NP) or a pronoun that agrees in person and number with the clause verb is sought, unless the verb is imperative or impersonal. As we are also working on unrestricted texts to which partial parsing is applied, zero-pronouns must also be detected when full syntactic information is not available. In Ferrández et al. (1998), a partial parsing strategy that provides all the necessary information for resolving anaphora is presented. That study shows that only the following constituents are necessary for anaphora resolution: coordinated prepositional and noun phrases, pronouns, conjunctions and verbs, regardless of the order in which they appear in the text. When partial parsing is carried out, one problem that arises is how to detect the different clauses of a sentence. Another problem is how to detect the zero-pronoun, i.e. the omission of the subject from each clause. With regard to the first problem, the following heuristic is applied to identify a new clause:

H1: Let us assume that the beginning of a new clause has been found when a verb is parsed and a free conjunction is subsequently parsed.
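Heuristic H1 lends itself to a single scan over the partial parser's output. The sketch below is our own illustration, not SUPAR's code: the tag names and list representation are assumptions modelled on the paper's notation, the subject check simply looks for an NP or pronoun before the clause verb, and imperative/impersonal verbs are ignored for brevity.

```python
# H1 as a single scan: a clause boundary is assumed whenever a free
# conjunction is parsed after a verb has already been seen.

def split_clauses(constituents):
    """constituents: (tag, text) pairs from the partial parser, e.g.
    ('np', ...), ('verb', ...), ('pp', ...), ('conj', ...), ('freeWord', ...)."""
    clauses, current, seen_verb = [], [], False
    for tag, text in constituents:
        if tag == "conj" and seen_verb:
            clauses.append(current)       # H1: verb ... conj => close the clause
            current, seen_verb = [], False
        else:
            if tag == "verb":
                seen_verb = True
            current.append((tag, text))
    if current:
        clauses.append(current)
    return clauses

def subject_omitted(clause):
    # Posit a zero-pronoun when no NP or pronoun precedes the clause verb.
    for tag, _ in clause:
        if tag == "verb":
            return True
        if tag in ("np", "pron"):
            return False
    return False

# "John y Jane llegaron tarde al trabajo porque se durmieron"
parsed = [("np", "John y Jane"), ("verb", "llegaron"), ("freeWord", "tarde"),
          ("pp", "al trabajo"), ("conj", "porque"), ("verb", "durmieron")]
clauses = split_clauses(parsed)
print([subject_omitted(c) for c in clauses])
```

On this input the scan finds two clauses; the first has an overt NP subject, while the second yields a zero-pronoun, matching the analysis of sentence (1) below.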
(1) John y Jane llegaron tarde al trabajo porque ∅ se durmieron
(John and Jane were late for work because [they]∅ over-slept)

[Footnote 1: The symbol ∅ will always show the position of the omitted pronoun.]

In this particular case, a free conjunction does not include the conjunctions that join coordinated noun and prepositional phrases. It refers, here, to the conjunctions that are parsed in our partial parsing scheme. [Footnote 2: For example, it would include punctuation marks such as a semicolon.] For instance, in sentence (1), the following sequence of constituents is parsed:

np(John and Jane), verb(were), freeWord(late), pp(for work), conj(because), pron(they), verb(over-slept)

[Footnote 3: The free words consist of constituents that are not covered by this partial parsing (e.g. adverbs).]

Since the free conjunction porque (because) has been parsed after the verb llegaron (were), the new clause with a new verb, durmieron (over-slept), can be detected. With reference to the second problem, detecting the omission of the subject from each clause with partial parsing, it is solved by searching through the clause constituents that appear before the verb. In sentence (1), we can verify that the first verb, llegaron (were), does not have its subject omitted, since an np(John and Jane) appears before it. However, there is a zero-pronoun, (they)∅, for the second verb, durmieron (over-slept).

(2) Pedro_j vio a Ana_k en el parque. ∅_k Estaba muy guapa
(Peter_j saw Ann_k in the park. [She]∅_k was very beautiful)

When the zero-pronoun is detected, our computational system inserts the pronoun in the position in which it has been omitted. This pronoun will be resolved in the following module of anaphora resolution. Person and number information is obtained from the clause verb. Sometimes in Spanish, gender information for the pronoun can be obtained when the verb is copulative. For example, in sentence (2), the verb estaba (was) is copulative, so its subject must agree in gender and number with its object whenever the object can have either a masculine or a feminine linguistic form (guapo: masc, guapa: fem). We can therefore obtain information about its gender from the object, guapa (beautiful in its feminine form), which automatically assigns it the feminine gender, so the omitted pronoun would have to be she rather than he. Gender information can be obtained from the object of the verb with partial parsing, as we simply have to search for an NP on the right of the verb.

3 Zero-pronoun resolution

In this module, anaphors (i.e. anaphoric expressions such as pronominal references or zero-pronouns) are treated from left to right as they appear in the sentence, since, on the detection of any kind of anaphor, the appropriate set of restrictions and preferences is applied. The number of previous sentences considered in the resolution of an anaphor is determined by the kind of anaphora itself. This feature was arrived at following an in-depth study of Spanish texts. For pronouns and zero-pronouns, the antecedents in the four previous sentences are considered. The following restrictions are first applied to the list of candidates: person and number agreement, c-command constraints [4] and semantic consistency [5]. This list is sorted by proximity to the anaphor. Next, if after applying the restrictions there is still more than one candidate, the preferences are applied, with the degree of importance shown in Figure 1. This sequence of preferences (from 1 to 10) stops whenever only one candidate remains after a given preference has been applied. If after all the preferences have been applied there is still more than one candidate left, the most repeated candidates [6] in the text are extracted from the list, and if there is still more than one candidate, the candidates that have appeared most frequently with the verb of the anaphor are extracted from the previous list.
Finally, if after all the previous preferences have been applied there is still more than one candidate left, the first candidate in the resulting list (the closest to the anaphor) is selected.

[Footnote 4: The usage of c-command restrictions on partial parsing is presented in Ferrández et al. (1998).]
[Footnote 5: Semantic knowledge is only used when working on restricted texts.]
[Footnote 6: Here, we mean that we first obtain the maximum number of repetitions for an antecedent in the remaining list. After that, we extract the antecedents that have this value of repetition from the list.]

The set of constraints and preferences differs in two basic respects from that required for Spanish pronominal anaphora: a) zero-pronoun resolution restricts agreement only to person and number (whereas pronominal anaphora resolution requires gender agreement as well), and b) a different set of preferences is used.

1) Candidates in the same sentence as the anaphor.
2) Candidates in the previous sentence.
3) Preference for candidates in the same sentence as the anaphor and those that have been the solution of a zero-pronoun in the same sentence as the anaphor.
4) Preference for proper nouns or indefinite NPs.
5) Preference for proper nouns.
6) Candidates that have been repeated more than once in the text.
7) Candidates that have appeared with the verb of the anaphor more than once.
8) Preference for noun phrases that are not included in a prepositional phrase, or those that are connected to an Indirect Object.
9) Candidates in the same position as the anaphor, with reference to the verb (before the verb).
10) If the zero-pronoun has gender information, those candidates that agree in gender.

Figure 1. Anaphora resolution preferences.

The main difference between the two sets of preferences is the use of two new preferences in our algorithm: Nos. 3 and 10. Preference 10 is the last preference since the POS tagger does not indicate whether the object has both masculine and feminine linguistic forms [7] (i.e. the information obtained from the object when the verb is copulative). Gender information must therefore be considered a preference rather than a restriction. Another interesting fact is that syntactic parallelism (Preference No. 9) continues to be one of the last preferences, which emphasizes a problem unique to Spanish texts, in which syntactic structure is quite flexible (unlike English).

[Footnote 7: For example in: Peter es un genio (Peter is a genius), the tagger does not indicate that the object does not have both masculine and feminine linguistic forms. Therefore, a feminine subject would use the same form: Jane es un genio (Jane is a genius). Consequently, although the tagger says that the verb, es (is), is copulative, and the object, un genio (a genius), is masculine, this gender could not be used as a restriction for the zero-pronoun in the following sentence: ∅ Es un genio.]

4 Evaluation

4.1 Experiments accomplished

Our computational system (SUPAR) has been trained on a handmade corpus [8] with 106 zero-pronouns. This training mainly involved the improvement of the set of preferences, i.e. finding the optimum order of preferences in order to obtain the best results. After that, we carried out a blind evaluation on unrestricted texts. Specifically, SUPAR has been run on two different Spanish corpora: a) a part of the Spanish version of The Blue Book corpus, which contains the handbook of the International Telecommunications Union CCITT, published in English, French and Spanish, and automatically tagged by the Xerox tagger, and b) a part of the Lexesp corpus, which contains Spanish texts from different genres and authors. These texts are taken mainly from newspapers, and are automatically tagged by a different tagger from that used on The Blue Book. The part of the Lexesp corpus that we processed contains ten different stories related by a sole narrator, although they were written by different authors. Having worked with different genres and disparate authors, we feel that the applicability of our proposal to other sorts of texts is assured. In Figure 2, a brief description of these corpora is given. For these corpora, partial parsing of the text with no semantic information has been used.

[Footnote 8: This corpus has been provided by our colleagues at the University of Alicante, who were asked to propose sentences with zero-pronouns.]

                          Number of words   Number of sentences   Words per sentence
Lexesp corpus   Text 1    972               38                    25.6
                Text 2    999               55                    18.2
                Text 3    935               34                    27.5
                Text 4    994               36                    27.6
                Text 5    940               67                    14
                Text 6    957               34                    28.1
                Text 7    1025              59                    17.4
                Text 8    981               40                    24.5
                Text 9    961               36                    26.7
                Text 10   982               32                    30.7
The Blue Book corpus      15,571            509                   30.6

Figure 2. Description of the unrestricted corpora used in the evaluation.

4.2 Evaluating the detection of zero-pronouns

To achieve this sort of evaluation, several different tasks must be considered. Each verb must first be detected. This task is easily accomplished, since both corpora have been previously tagged and manually reviewed. No errors are therefore expected in verb detection, so a recall rate [9] of 100% is accomplished. The second task is to classify the verbs into two categories: a) verbs whose subjects have been omitted, and b) verbs whose subjects have not. The overall results of this classification are presented in Figure 3 (a success rate [10] of 88% on 1,599 classified verbs, with no significant differences between the corpora). We should also remark that a success rate of 98% was obtained in the detection of verbs whose subjects were omitted, whereas only 80% was achieved for verbs whose subjects were not. This lower success rate is justified for several reasons. One important reason is the non-detection of impersonal verbs by the POS tagger. This problem has been partly resolved by heuristics such as a set of impersonal verbs (e.g. llover (to rain)), but these fail on some impersonal uses of other verbs. For example, in sentence (3), the verb es (to be) is not usually impersonal, but it is in the following sentence, on which SUPAR would fail:

(3) ∅ Es hora de desayunar
([It]∅ is time to have breakfast)

Two other reasons for the lower success rate achieved with verbs whose subjects were not omitted are the lack of semantic information and the inaccuracy of the grammar used. The second reason refers to the ambiguity and the unavoidable incompleteness of the grammars, which also affects the process of clause splitting. In Figure 3, an interesting fact can be observed: 46% of the verbs in these corpora have their subjects omitted. This shows quite clearly the importance of this phenomenon in Spanish. Furthermore, it is even more important in narrative texts, as the figure shows: 61% in the Lexesp corpus, compared to 26% in the technical manual. We should also observe that The Blue Book has no verbs in either the first or the second person. This may be explained by the style of the technical manual, which usually consists of a series of isolated definitions, i.e. many paragraphs that are not related to one another. This explanation is confirmed by the relatively small number of anaphors found in that corpus, as compared to the Lexesp corpus. We have not considered comparing our results with those of other published works since, as explained in the Background section, ours is the first study done specifically for Spanish texts, and the design of the detection stage depends mainly on the structure of the language in question. Any comparisons that might be made with other languages, therefore, would prove rather insignificant.

[Footnote 9: By "recall rate", we mean the number of verbs classified, divided by the total number of verbs in the text.]
[Footnote 10: By "success rate", we mean the number of verbs successfully classified, divided by the total number of verbs in the text.]

                         Verbs with subject omitted                 Verbs with subject not omitted
                         1st person    2nd person    3rd person     1st person    2nd person    3rd person
Lexesp corpus
  Total / % success      111 / 100%    42 / 100%     401 / 99%      21 / 81%      3 / 100%      328 / 76%
  Share of side          20%           7%            73%            7%            1%            92%
  Side total             554 (61%), success rate 99%                352 (39%), success rate 76%
The Blue Book corpus
  Total / % success      0 / 0%        0 / 0%        180 / 97%      0 / 0%        0 / 0%        513 / 82%
  Share of side          0%            0%            100%           0%            0%            100%
  Side total             180 (26%), success rate 97%                513 (74%), success rate 82%
Total                    734 (46%), success rate 98%                865 (54%), success rate 80%
Overall                  1,599 verbs, success rate 88%

Figure 3. Results obtained in the detection of zero-pronouns.

4.3 Evaluating anaphora resolution

As shown in the previous section (Figure 3), of the 1,599 verbs classified in these two corpora, 734 have zero-pronouns. Only 581 of them, however, are in the third person and will be resolved. In Figure 4, we present a classification of these third-person zero-pronouns, divided into three categories: cataphoric, exophoric and anaphoric. The first category comprises those whose antecedent, i.e. the clause subject, comes after the verb. For example, in sentence (4) the subject, a boy, appears after the verb compró (bought).
(4) ∅_k Compró un niño_k en el supermercado
(A boy_k bought in the supermarket)

This kind of verb is quite common in Spanish, as can be seen in the figure (49%). This fact represents one of the main difficulties in resolving anaphora in Spanish: the structure of a sentence is more flexible than in English. These are intonationally marked sentences, where the subject does not occupy its usual position in the sentence, i.e. before the verb. Cataphoric zero-pronouns will not be resolved in this paper, since semantic information is needed to discard all of their antecedents and to prefer those that appear within the same sentence and clause after the verb. For example, sentence (5) has the same syntactic structure as sentence (4), i.e. verb, np, pp, where the subject function of the np can only be distinguished from the object by means of semantic knowledge.

(5) ∅ Compró un regalo en el supermercado
([He]∅ bought a present in the supermarket)

The second category consists of those zero-pronouns whose antecedents do not appear, linguistically, in the text (they refer to items in the external world rather than things referred to in the text). Finally, the third category is that of pronouns that will be resolved by our computational system, i.e. those whose antecedents come before the verb: 228 zero-pronouns. These pronouns would be equivalent to the full pronoun he, she, it or they.

                        Cataphoric   Exophoric   Anaphoric (number)   Success (anaphoric)
Lexesp corpus           171 (42%)    56 (12%)    174 (46%)            78%
The Blue Book corpus    113 (63%)    13 (7%)     54 (30%)             68%
Total                   284 (49%)    69 (12%)    228 (39%)            75%

Figure 4. Classification of third-person zero-pronouns.

The different accuracy results are also shown in Figure 4: a success rate of 75% was attained for the 228 anaphoric zero-pronouns. By "successful resolutions" we mean that the solutions offered by our system agree with the solutions offered by two human experts.
For each zero-pronoun there are, on average, 355 candidates before the restrictions are applied, and 11 candidates afterwards. Furthermore, we repeated the experiment without applying restrictions, and the success rate was significantly reduced. Since the results provided by other works have been obtained on different languages, texts and sorts of knowledge (e.g. Hobbs and Lappin fully parse the text), direct comparisons are not possible. Therefore, in order to accomplish this comparison, we have implemented some of these approaches in SUPAR. Although some of these approaches were not proposed for zero-pronouns, we have implemented them since, like our approach, they can also be applied to this kind of pronoun. For example, with the baseline presented by Hobbs (1977), an accuracy of 49.1% was obtained, whereas with our system we achieved 75% accuracy. These results highlight the improvement accomplished with our approach, since Hobbs' baseline is frequently used as a point of comparison in work on anaphora resolution [11]. The reason why Hobbs' algorithm works worse than ours is that it carries out a full parse of the text. Furthermore, the way Hobbs' algorithm explores the syntactic tree is not well suited to Spanish, since it is nearly a free-word-order language. Our proposal has also been compared with the typical baseline of morphological agreement plus proximity preference (i.e., the antecedent that appears closest to the anaphor is chosen from among those that satisfy the restrictions). The result is a 48.6% accuracy rate. Our system, therefore, improves on this baseline as well. Lappin and Leass (1994) has also been implemented in our system, and an accuracy of 64% was attained.

[Footnote 11: In Tetreault (1999), for example, it is compared with an adaptation of the Centering Theory by Grosz et al. (1995), and Hobbs' baseline out-performs it.]
Moreover, in order to compare our proposal with the Centering approach, Functional Centering by Strube and Hahn (1999) has also been implemented, and an accuracy of 60% was attained.

One of the improvements afforded by our proposal is that statistical information from the text is combined with the other kinds of information (syntactic, morphological, etc.). Dagan and Itai (1990), for example, developed a statistical approach for pronominal anaphora, but the information they used consisted simply of the patterns obtained from a previous analysis of the text. To compare our approach with that of Dagan and Itai, and to evaluate the importance of this kind of information, our method was applied with statistical information12 only. If more than one candidate remains after the statistical-information preference has been applied, the proximity preference is then applied. The results obtained were lower than when all the preferences are applied jointly: 50.8%. These low results are due to the fact that the statistical information was gathered only from the beginning of the text up to the pronoun; previous training on other texts would be necessary to obtain better results.

The success rates reported in Ferrández et al. (1999) for pronominal references (82.2% for Lexesp, 84% for the Spanish version of The Blue Book, and 87.3% for the English version) are higher than our 75% success rate for zero-pronouns. This reduction (from 84% to 75%) is due mainly to the lack of gender information in zero-pronouns. Mitkov (1998) obtains a success rate of 89.7% for pronominal references, working with English technical manuals. It should be pointed out, however, that he used some knowledge that was very close to the genre13 of the text.

12 This statistical information consists of the number of times that a word appears in the text and the number of times that it appears with a verb.
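The statistical-information-only setting can likewise be sketched. Following footnote 12, each candidate head is scored by how many times it has appeared in the text so far plus how many times it has appeared with a verb, and ties are broken by proximity; the function and field names are hypothetical, and combining the two counts as a simple sum is an assumption of this sketch:

```python
from collections import Counter

def statistical_preference(candidates, freq, with_verb):
    """Score candidates with the counts of footnote 12 (occurrences so
    far, and occurrences with a verb); if several candidates tie on the
    statistical score, fall back on the proximity preference."""
    def score(c):
        return freq[c["head"]] + with_verb[c["head"]]
    best = max(score(c) for c in candidates)
    tied = [c for c in candidates if score(c) == best]
    return max(tied, key=lambda c: c["position"])  # closest of the tied

# Counts gathered from the beginning of the text up to the pronoun.
freq = Counter({"empresa": 5, "informe": 2, "ventas": 5})
with_verb = Counter({"empresa": 3, "informe": 1, "ventas": 3})
cands = [{"head": "ventas",  "position": 2},
         {"head": "empresa", "position": 6},
         {"head": "informe", "position": 8}]
print(statistical_preference(cands, freq, with_verb)["head"])  # empresa
```

Because the counts are accumulated only over the text seen so far, early pronouns are scored on very sparse evidence, which is the explanation given for the low 50.8% result and for why prior training on other texts should help.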
In our study, such information was not used, so we consider our approach to be more easily adaptable to different kinds of texts. Moreover, Mitkov worked exclusively with technical manuals whereas we have worked with narrative texts as well. The difference observed is due mainly to the greater difficulty posed by narrative texts compared with technical manuals, which are generally better written. In any case, the applicability of our proposal to different genres of text seems to have been well proven. Furthermore, if the order of application of the preferences14 is varied for each different text, an 80% overall accuracy rate is attained. This fact implies that there is another kind of knowledge, close to the genre and author of the text, that should be used for anaphora resolution.

13 For example, the antecedent indicator "section heading preference": if an NP occurs in the heading of the section of which the current sentence is part, it is considered to be the preferred candidate.

Conclusion

In this paper, we have proposed the first algorithm for the resolution of zero-pronouns in Spanish texts. It has been incorporated into a computational system (SUPAR). In the evaluation, several baselines for pronominal anaphora resolution were also implemented, and our algorithm achieved better results than any of them. As a future project, the authors shall attempt to evaluate the importance of semantic information for zero-pronoun resolution in unrestricted texts. Such information will be obtained from a lexical tool (e.g. EuroWordNet) which can be consulted automatically. We shall also evaluate our proposal in a Machine Translation application, where we shall test its success rate by its generation of the zero-pronoun in the target language, using the algorithm described in Peral et al. (1999).

References

Ido Dagan and Alon Itai (1990) Automatic processing of large corpora for the resolution of anaphora references. In Proceedings of the 13th
International Conference on Computational Linguistics, COLING (Helsinki, Finland).

14 The difference between the individual sets of preferences is the degree of importance of the preferences for proper nouns and syntactic parallelism.

Antonio Ferrández, Manuel Palomar and Lidia Moreno (1998) Anaphora resolution in unrestricted texts with partial parsing. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, COLING-ACL (Montreal, Canada). pp. 385-391.

Antonio Ferrández, Manuel Palomar and Lidia Moreno (1999) An empirical approach to Spanish anaphora resolution. To appear in Machine Translation 14(2-3).

Jerry Hobbs (1977) Resolving pronoun references. Lingua, 44. pp. 311-338.

Christopher Kennedy and Branimir Boguraev (1996) Anaphora for Everyone: Pronominal Anaphora Resolution without a Parser. In Proceedings of the 16th International Conference on Computational Linguistics, COLING (Copenhagen, Denmark). pp. 113-118.

Shalom Lappin and Herbert Leass (1994) An algorithm for pronominal anaphora resolution. Computational Linguistics, 20(4). pp. 535-561.

Ruslan Mitkov (1998) Robust pronoun resolution with limited knowledge. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, COLING-ACL (Montreal, Canada). pp. 869-875.

Hiromi Nakaiwa and Satoshi Shirai (1996) Anaphora Resolution of Japanese Zero Pronouns with Deictic Reference. In Proceedings of the 16th International Conference on Computational Linguistics, COLING (Copenhagen, Denmark). pp. 812-817.

Manabu Okumura and Kouji Tamura (1996) Zero Pronoun Resolution in Japanese Discourse Based on Centering Theory. In Proceedings of the 16th International Conference on Computational Linguistics, COLING (Copenhagen, Denmark). pp. 871-876.
Jesús Peral, Manuel Palomar and Antonio Ferrández (1999) Coreference-oriented Interlingual Slot Structure and Machine Translation. In Proceedings of ACL Workshop on Coreference and its Applications (College Park, Maryland, USA). pp. 69-76. Michael Strube and Udo Hahn (1999) Functional Centering – Grounding Referential Coherence in Information Structure. Computational Linguistics, 25(5). pp. 309-344.
"+KJ9 !ML N*OQPRTSFUVNP WYX[Z*S(\^]`_QPbadcaeNPfFSFZ*RYUCSFZgN*OQPbWQSihW X[OQZ9aFjPkN*OQPRlaeZ*PmcnSFZ*PUCPZ*P WoNqpr]?N*OsutbPmX[v`aFWlNS UwhQZ*N*OQPZPnx^X[v?SFZ*PDN*OP jPqaeZ*P aFj.]`WyN*OQPIUwhQN*hQZ*PFs z {}|~€Q@‚ƒe„!|…~ tbP†Oa(\FP‡X[Z*P jP WoNP _ˆjP\FPZ aFv‰NP c OW[]`ŠohP j‹UCSFZ cnSFZ*PUCPZ*P WcnPŒZ*P j*Siv`hQN*]?SiWaFW_‰P\eaFv`haeNP _P aFc OŽSFU N*OQP ŒstbP‘Oa(\FPA_P kSiWjNZ aeNP _uN*OaeN’N*OQP j*P“NP c9O^” W]`Š•hQP j…caFW—–P“]`W˜NPfFZ9aeNP _™NS.XZ S(\^]`_QPšaFhNSi›aeN*]`c cnSFZ*PUCPZ*P WcnPAaFWWQSFN*aeN*]?SiW—SiW—Z*P aFvœ”!p“SFZ v`_I_aeN*aaFW_ N*OaeNN*OaeNžaFWWQSFN*aeN*]?SiWŸcaFW –PuhjP _gNSy]`kX[Z*S(\FP WaeN*hQZ9aFv3v`aFWQfihaefFPIX[Z*S•cnP j j*]`WQfkaeXX[v¡]`caeN*]?SiWjs ¢l£¤£¥i£ ~š€ £ ƒ ¦F§(¨ª©¬«˜§Ž­® ®9§(¯™°u±A²9³G©¬´¶µ·i¯ž¸i«i¹%² ºF» °™§ ´o¼¾½'¿(À•² ¹@Á ÂA§(©ª®E§ ·˜»@ç(»9ÄÆÅ9ÇÇ(ǘÄÉÈ»@©ª´˜ÊÌË*¿¹@² Í`² ¹%² ´˜Ë ²yË«o§ ©¬´˜» Í`¿¹‘Á@² ÎeÁ.»M·˜¯¯D§(¹@©¬®9§ Á@©¬¿(´ÄuÏ´ŸÐÑ@ÒEÓÔÔ%Õ Ö?×GØ Ù—Ò!ÚqÛ?Ü˜Ô Ý ÒÑ@ÞÙ@ÜiÒ%ߌÒ×yàÒÑ@ÔVÚ*Ô*Ñ%Ô*×^Ó%ÔráוÕâ*ِۡãß(ß^äKÖ¡ÓáÛCÖ¡Ò×iÙ° 兿¨ª¨¬² ʲ“æ3§(¹@Ã^°F盧 ¹%ºe¨¬§(´˜¼Q°iè·i´i²Ä é ¹@²EËÃ é § ¨œ¼Fê'©¬´g§ ´˜¼y뒫i¿¯ž§»‘çk¿¹MÁ%¿(´ěÅEÇ(ÇìiÄq횺Gî ´˜§(¯ž©¬ËdË ¿(¹%²*Í`² ¹%² ´oË*²*îwÀ˜§»M²E¼ï»M·˜¯¯D§(¹@©¬®9§ Á@©¬¿(´Ä Ï´ АÑ%Ò9ÓÔÔÕÖ?×GØ ÙAÒMÚÛ`Üiԗð•ÜFÖ?Ñ@ՙàÒ×EÚ*Ô*Ñ%Ô*×^Ó%ÔrÒ×qñòߕÖ?Ñó Ö¡ÓáäômÔ*Û`ÜiÒEÕnٛÖ?×lõráÛCöFÑ@á ä÷[á ×GØ(ö˜á ØGÔuВÑ@Ò9ÓÔ*ÙÙÖ?×eØ ° “¹§ ´˜§¼i§i°i¦e¸o§ ©¬´°˜è·i´i²(Ä é ¹@²EËÃ é §(¨¬¼Fê'©¬´°žè(² øï½'² ºe´˜§(¹9°çk©ªÃ²l兿¨ª¨¬©ª´o» °žè§ î »@¿(´™ù©œ»M´˜² ¹E°˜­¼F꒧ ©ªÁ½ú§ Á@´˜§(¸˜§ ¹%Ãe©V°iè¿»@² ¸i«™½'¿G»M²9´Fî ®9ꅲ9©ªÊo°­´i¿e¿(¸û¦i§ ¹%Ãn§(¹9°3§(´˜¼ ¦e¹%©ª´˜©ª³§(» é § ´iÊG§ ¨¬¿(¹%²(Ä ÅEÇ(ÇüiÄ1횲9»%Ë*¹%©ª¸iÁ@©¬¿(´I¿(ÍÁ@«i²AÈú´˜©ª³² ¹»M©ªÁ!ºž¿ Í1æ²9´i´˜»@ºG¨ªî ³§ ´˜©¬§k»@ºF»!Á%² ¯ý·˜»@²9¼;Í`¿¹‘ç›È“åîþiÄqÏ´gВÑ@Ò9ÓÔÔÕÖ?×eØÙ ÒMڅÛ`ÜiÔ'ÿoÖÛ`Ürô›Ô*ÙÙ á ØGÔ1×^Õ(Ô ÑÙÛwá וÕÖ?×eØDàÒ×EÚ*Ô*Ñ%Ô*×^Ó%Ô  ô…àó °•å…¿¨ª·˜¯rÀi©œ§i°i盧(¹@ºe¨œ§ ´˜¼Ä ùúÄ é ¹%²9ËÃ^° èoÄ é ·i¹%Ê(² ¹E° Ä i²9¹@¹%¿˜°(í.ĵ¿(·˜»@²(°çŒÄ [©ªÊ«ÁE° §(´˜¼ ÏÄçm§ ´i©CčÅ9Ç(ÇÇiÄ ­ý¦Fºe»DË9§ ¨¬¨ª²E¼“§(´˜¼i§˜Ä Ï´ АÑ%Ò9ÓÔÔÕÖ?×GØ ÙžÒMڑÛ`ÜiԑñÖ,ØnÜeÛ`Ü ð#Ô (Ûñ’ÛVÑÖ¡Ôná ä’à3Ò ×˜ó Ú*Ô Ñ@Ô 
×•ÓÔ  ðñ“àón° ÂA§(©,Á%«i² ¹»@Ài·i¹%ʘ°}çm§ ¹%ºG¨œ§ ´o¼Q°  ¿³(² ¯.Ào²9¹9Ä  ÏM¦FëAÄ íš²*Í`² ´o»M² ­¼i³n§(´˜Ë*²E¼ ½'²9»@²9§(¹%Ë« æ¹@¿ !²9Ë*Á%» ­úʲ ´˜Ë º  íú­½úæQ­Ä Å9Ç(ÇGüFÄ ÐÑ@ÒEÓÔÔ%Õ Ö?×GØ Ù=Ò!ÚbÛ`ÜiÔ ÿoÖÛ`Ü ômÔ*Ù%Ù á9ØÔ1×^Õ(Ô ÑÙÛá×^ÕÖ?×eØlàÒ×9Ú*Ô Ñ@Ô ×•ÓÔ  ô…àón° 兿¨ª·i¯.Ài©œ§i°i盧(¹@ºe¨œ§ ´˜¼Q°  ¿³(² ¯.Ào²9¹9Ä  ©¬ºe·—“²(°eè¿(«i´Dµ§(¨ª²°G§(´˜¼Dù·˜Ê(² ´˜²“å…«˜§ ¹%´i©œ§ Ã^ÄÅEÇ(ÇìiÄ ­ï»MÁ%§ Á@©œ»!Á%©¬Ë9§ ¨§(¸i¸i¹%¿§(Ë«}Á@¿›§(´˜§ ¸˜«i¿(¹§q¹%²9»@¿(¨¬·FÁ%©ª¿´Ä Ï´gАÑ@ÒEÓÔ%ÔÕ Ö?×GØ ÙIÒMڞÛ?ܘԞÿ•Ö(Û`Ü Ý ÒÑ@ÞÙ@ÜiÒ%ß ÒוÔ*Ñ! ÷1áÑVØGÔAàÒÑCߘÒÑ%á°盿(´GÁ@¹%²9§(¨V°"š·i² À•²9Ë(°吧 ´˜§¼i§i°n­ú·iî Ê·˜»!ÁEÄ è² ¹%¹@ºD½Äiµ¿(ÀiÀ˜»9Ä'Å9Ç$#þ˜Ä1æ¹%¿(´˜¿(·i´I¹%²9»@¿(¨¬·FÁ%©ª¿´Ä3ë[²EË«Fî ´i©œË §(¨½'² ¸•¿(¹@Á#þ î%Ű˜å…©,Á!ºu兿¨ª¨¬² ʲ(°  ² ê&%¿(¹%Ã•Ä ¦e«˜§(¨ª¿¯' 1§ ¸i¸i©¬´y§ ´˜¼yµú²9¹@À•² ¹@ÁrèoÄ( ²E§(»%» ÄkÅEÇ(Ç")oÄD­ú´ §(¨ªÊ¿(¹%©,Á%«i¯ Í`¿(¹“¸i¹@¿´i¿(¯ž©¬´˜§ ¨#§ ´˜§(¸i«i¿¹%§D¹%²9»@¿(¨¬·FÁ%©ª¿´Ä àÒòߕöFÛáÛCÖ¡Ò×^áäQ÷Ö?×GØ(öF֜ÙÛVÖ¡Ó*Ù°i¸˜§(Ê(²9»úü *ü,+Fü þ˜Å(Ä çŒÄçm§ ¹Ë*·˜»9° é Ä[¦F§ ´GÁ%¿(¹%©ª´i©C°§ ´o¼}çŒÄçm§ ¹Ë*©¬´iÃe©ª²9ê'©¬Ë ®(Ä ÅEÇ(Ç )˜Ä é ·i©¬¨œ¼F©ª´˜Ê.§‘¨œ§ ¹%Ê(²š§ ´i´i¿(Á%§ Á@²9¼qË*¿(¹%¸i·˜»…¿ Í[ù´Fî ʨª©œ»@«.-3Á@«˜²‘æ² ´˜´™ë1¹%² ² Ào§ ´iÃ^đàÒòߕöFÛáÛCÖ¡Ò×^áäQ÷Ö?טó Ø(öF֜ÙÛVÖ¡Ó*Ù°Å9Ç 0/ !*iÅ1*2+$* * 3iÄ ½'·o»M¨œ§ ´yçk©ªÁ@Ã(¿³^ÄuÅ9ÇÇ4#eÄ5 ˜§(Ë*Á@¿¹%»“©¬´y§(´˜§ ¸˜«i¿(¹§I¹%²9»@¿ î ¨¬·FÁ%©ª¿´.-뒫i²9ºq§ ¹%²A´i¿ Á'Á%«i²¿(´i¨¬ºIÁ%«i©¬´iÊ»’Á%«˜§Áú¯D§ ÁMî Á%² ¹EÄ'­ Ë9§(»@²}»MÁ@·˜¼FºbÀo§(»@²9¼ ¿´ Á!ꐿû¼F©ªø^² ¹%² ´GÁ™§(¸Fî ¸˜¹@¿G§(Ë«i²E» ÄQÏ´DАÑ@ÒEÓ%ÔÔÕÖ?×eØÙúÒ!ڐÛ`ÜiԐã.à÷76 8:9;iñã.à[÷ 6 8:9 Ý Ò Ñ@ÞnÙ@ÜiÒߟÒ×=<1ßiÔ Ñ@áÛCÖ¡Ò×^áä.>[áÓ*ÛÒÑّÖ?כАÑ@áÓ*ó ÛCÖ¡Óáä?šÒ @ öeÙÛãš×•áßiܘÒÑ@áAÔ*Ù ÒäKöFÛCÖ¡Ò×oÄ ë’«i¿¯D§(»gçk¿¹MÁ%¿(´Ä Å9Ç(ÇÇiÄ Èš»M©¬´iʎË*¿¹@² Í`² ¹%² ´˜Ë ²l©ª´ B ·i²E»!Á%©ª¿´ §(´˜»Mꐲ ¹%©¬´iÊ˜Ä Ï´†ÐÑ@ÒEÓÔ%ÔÕ Ö?×GØ ÙYÒ!Ú Û?Ü˜Ô ñ’Ö,ØEÜFÛ?ÜYð#ÔÛCñÛCÑÖ¡Ôáä.à3Ò ×9Ú*Ô Ñ@Ô ×•ÓÔ  ð?šñ“àó!° ÂA§(©,Á%«i² ¹»@Ài·i¹%ʘ°F盧 ¹%ºe¨¬§(´˜¼Q°  ¿³(² ¯.Ào²9¹9Ä  ÏM¦FëAÄ ­š¼Fꐧ(©,Á½§Á@´o§ ¸˜§(¹@Ãe«i©CÄÅ9ÇÇ(þ˜Ä[­b盧ÎF©¬¯r·i¯ ù´GÁ@¹%¿(¸eº 慧 ¹@Áq¿ ͞¦e¸o²9²9Ë«Ìë#§ ÊÊ(²9¹9ĎÏ´ÐÑ@ÒEÓ%ÔÔÕÖ?×eØ Ò!ÚmÛ?Ü˜Ô àÒ ×9Ú*Ô Ñ@Ô*×^ÓÔûÒ 
×‰ñòߕÖ?ÑÖ¡Óáä.ômÔ*Û`ÜiÒEÕnٌÖ?׉õ.áÛCöFÑ@áä ÷1á×eØ ö˜á9ØԞАÑ@ÒEÓÔÙÙÖ?×GØ(°1È´i©¬³(² ¹»@©,Á!º›¿(͐æ²9´i´˜»@ºG¨¬³§î ´˜©¬§˜°i盧nº›Å,#nî%ÅEìiÄ ­š¼Fꐧ(©,ÁD½ú§ Á@´˜§(¸˜§ ¹%Ãe«i©VÄbÅEÇ(Ç$#eČ­‹»@©¬¯¸˜¨ª²™©¬´Á%¹@¿F¼F·oËî Á%©ª¿´ïÁ%¿ ¯D§ÎF©¬¯r·i¯ ²9´Á%¹@¿¸eº ¯ž¿F¼F² ¨œ» Í`¿(¹l´˜§Á%·Fî ¹§ ¨3¨œ§ ´˜Ê(·˜§(Ê(²¸i¹@¿FË ²9»%»M©¬´iʘÄIë1²9Ë«˜´i©¬Ë9§ ¨½'²9¸o¿¹MÁrÇ$#Eî 3ìi°eÏ´˜»MÁ@©ªÁ@·FÁ%²“Í`¿(¹'½'²9»@²9§(¹%Ë«q©ª´›å…¿(Ê´i©,Á%©ª³²“¦FË ©ª²9´˜Ë*²° È´i©¬³(² ¹»@©,Á!ºI¿(Í#æ²9´i´˜»@ºG¨¬³§ ´i©œ§iÄ è²*ø^¹@²9º.åšÄ½'² ºe´˜§(¹#§(´˜¼r­š¼Fꐧ(©,Á3½ú§ Á@´˜§(¸˜§ ¹%Ãe«i©VÄÅ9Ç(Ç$#eÄ ­ ¯D§ Îe©¬¯r·˜¯ ² ´GÁ%¹@¿¸Gº §(¸i¸i¹%¿§Ë« Á@¿b©œ¼F²9´Á%©,Í`ºe©¬´iÊ »@² ´GÁ%² ´˜Ë ²Ào¿·i´˜¼i§(¹@©¬²9»9ÄÏ´.АÑ%Ò9ÓÔÔÕÖ?×GØ Ù’ÒMÚÛ`ÜiÔ>Ö ÚÛ`Ü àÒ ×9Ú*Ô Ñ@Ô*×^ÓÔuÒ ×;ã…ßߕäKÖ¡Ôՙõrá ÛVöFÑ%áä1÷1á×GØ(ö˜á ØGԞАÑ@Òó ÓÔ*ÙÙÖ?×eØ °˜¸˜§ ʲ9»AÅ9þ2+•ÅEÇi°4D;§»M«i©¬´iÊ(Á@¿(´[°iírÄKåšÄª°o­ú¸i¹%©ª¨CÄ è²*ød½'² ºe´˜§(¹9ľÅEÇ(ÇìiÄÉð˜Òߕ֡Ó}ÿQÔCØ(òDÔ*×oÛwá ÛVÖ¡Ò ×E ãšä ØGÒó ÑÖ?Û`Üeò.Ù.á×^՝ãß(ß^äKÖ¡ÓáÛCÖ¡Ò×iÙÄúæ«Ä í.ÄFÁ%«i²9»@©¬»9°oÈ´i©¬³(² ¹@î »@©ªÁ!ºI¿ Í3æ² ´i´o»Mºe¨¬³n§(´i©œ§iÄ ­¯©ªÁ‰¦e©¬´iÊG§ ¨C°;¦GÁ%² ³(²d­Ài´i² º°;盩¬Ë«i©¬² ¨ é §Ë Ë«i©œ§ ´˜©V° 盩¬Ë«˜§(² ¨兿(¨¬¨ª©¬´˜»9°•íš¿(´o§ ¨œ¼›µú©¬´˜¼F¨¬²(°Q§ ´o¼F i²9¹@´˜§(´˜¼F¿ æ²9¹@²9©ª¹§iĚÅ9ÇÇ(ÇiĒ­3ëHG덧Ášë'½ùåîìiĐÏ´}АÑ@ÒEÓ%ÔÔÕó Ö?×eØÙmÒMÚkÛ?ܘÔuñ’Ö,ØEÜFÛ?ÜTð#ÔÛIñÛCÑÖ¡Ôáä‘àÒ ×9Ú*Ô Ñ@Ô*×^ÓÔ  ðñ“àón°IÂA§(©,Á%«i² ¹»MÀ˜·i¹@Êo°.盧(¹@ºe¨œ§ ´˜¼°  ¿³(² ¯î À•² ¹EÄ  ÏM¦FëAÄ ½'²9´˜§Á§FJš©ª²9©ª¹§›§ ´o¼g盧(»%»@©ª¯ž¿mæ¿e²9»@©¬¿˜ÄgÅEÇ(Ç4#Fěæ¹%¿ î Ë ²9»%»M©¬´iÊT¼F²K˜´˜©,Á%²‰¼F²E»@Ë ¹@©¬¸FÁ%©ª¿´˜» ©¬´‹Ë ¿(¹%¸o¿¹%§˜Ä Ï´ àÒ ÑCߕöeÙóL@ánÙ ÔÕTá וնàÒòߕöFÛáÛCÖ¡Òוá äDãß(ß^Ñ@Ò9áÓ%ܘÔÙ ÛÒNMr֜٠ÓÒöFÑ٠ԓãš×^á%ߘÜiÒÑ%áÄoÈAåO yæ¹%²9»%» Ä ç›§(¹%Ë5J©¬¨œ§ ©¬´°[è¿(«i´ é ·˜¹@ʲ ¹E°è(¿«i´}­Ào²9¹%¼F²9² ´°1í²9´Fî ´˜©¬»兿´i´i¿(¨¬¨¬º(°F§ ´˜¼P ºe´i² ÁMÁ%²Aµ©ª¹»%Ë«i¯D§ ´ĐÅ9ÇÇüFÄ­ ¯ž¿F¼F²9¨,îCÁ@«i²9¿(¹%²*Á%©¬ËË ¿(¹%²*Í`² ¹%² ´oË*²»@Ë ¿(¹%©ª´˜Êž»%Ë«i²9¯²ÄÏ´ ВÑ@Ò9ÓÔÔÕÖ?×eØÙkÒ!ÚIÛ`Üiԗÿ•ÖÛ?܌ômÔÙÙ á ØGÔ=1×^Õ(Ô ÑÙÛwá וÕó Ö?×e؝àÒ×9Ú*Ô Ñ@Ô ×•ÓÔ  ô…àó °兿¨ª·i¯.Ài©œ§i°n盧 ¹%ºe¨¬§(´˜¼QÄ ùÄQJ3¿e¿(¹%«i²9²9»q§(´˜¼bí.Ē뒩œË*²ÄïÅ9ÇÇ(ǘĎ뒫i²}ë'½úù’åîwì B ·i²E»!Á%©ª¿´b§ 
´˜»@ꅲ9¹@©¬´iÊ;Á@¹§(ËÃû² ³§ ¨¬·˜§ Á@©¬¿(´Ä=Ï´=АÑ@Òó ÓÔÔÕÖ?×eØÙdÒ!ډÛ?ܘÔÌñÖ,ØnÜeÛ`܆ð#ÔÛRñ’ÛVÑÖ¡Ôná äŒàÒטó Ú*Ô Ñ@Ô ×•ÓÔ  ðñAà#ó° ÂA§ ©ªÁ@«i²9¹%»@Ài·i¹%ʘ°;盧 ¹%ºe¨¬§(´˜¼Q°  ¿³(²9¯rÀ•² ¹EÄ  ÏM¦FëAÄ
2000
23
      ! "$#% &(') *+-,. * !/0 1 2436587 "" 1 * " 9 :;=<>?:A@BCED:GF H):GCEIJ>?KFMLN:>?O<PC QRFTSU<>J>VI?W<FTS4XTKYRSU<;ZY[9\C]B^WC]:; _a`cb6`ed2:GfgYihejk<lYR<:C-mEn opFI?q^<PCEYrIsS%KZB^t9uI?SrSUYvfxwC]W^n yz{|9 :GC]OM_kq^<lFw< 9uI?SvS]YvfgwC]W^n&}9~_ €>JBCEn:G; 98:GC]O}‚6@{„ƒG…†‡lˆ‰{^…TƒyŠo‹XŒ_ lސP‘l’-“”•—–U˜~™V˜•UšPš!™›—‘œ ž’ŸE ›”Pl›–›P’—¡U¢~™’šPš!™£¡—Ž]¤ _¥fxY%SUC]:mS ¦=§©¨PªP«P¬U­a®iªE¯¬U°—§©¨PªP±²¯³µ´]ª ´U§G¬Uª-¶ ¯¬U·¹¸-º ´U» ³¼®iªE¯¹®i«½«P³¾¬U°¼´U¿]¨—®~·¹¶·¯¹®i­)³µ·x¯À—®€¬rÁ³µ°Âº ³µ¯J¶¥¯¹´Ã¿U®iª—®‰» ¬r¯¹®cª—´]­\³µªP¬U°Œ®²ÄÅP»®i··1³¼´]ªP· ¯À¬r¯Z«P®i·±²»1³¼ÁG®M´UÁ—ÆÇ®i±²¯·È³µªÉ¯À—®Ê¯¬U·¹¸ «P´]­a¬U³µªËNÌͪM¯ÀP³µ·ÃŬrÅG®‰»i΀Ïg®Ð»®‰ÅG´U»¯ »1®i·¨P°¼¯·Š§Ñ»´]­ ¨·³µª—¿(­a¬U± ÀP³µª—®M°µ®i¬r»1ªº ³¾ª—¿Ò¯¹´Ó¯¹»1¬U³µªN¬UªP«É¯¹®i·¹¯È¬Ôª—´]­\³µªP¬U°Âº ®²ÄÅP»®i·1·³¼´]ª½¿U®iª—®‰»1¬r¯¹´U»!´]ª¬a·®‰¯2´U§ÕUÖUÕ ªP´]­a³µªP¬U°~«—®i·±²» ³¼ÅP¯³¼´]ªP·½§Ñ»´]­×¯ÀP®ÐØÙÚ ØGÙTÛÜÝÞ±²´U»Å¨P·´U§8¯¬U·¸Eº?´U»1³µ®iªE¯¹®i«Þ«—®²º ·1³¼¿]ªM«P³¾¬U°¼´U¿]¨—®i·‰Ëàß2®i·¨°¼¯·Ã·ÀP´%Ïá¯ÀP¬r¯ Ïx® ±‰¬Uªâ¬U±1ÀP³µ®‰ãU®a¬½äråEæN­\¬r¯±1À⯹´À¨º ­\¬UªMÅG®‰»§Ñ´U»1­a¬UªP±²®‹¬U·Ã´UÅÅ´]·®i«M¯¹´Z¬ çiè æéÁ¬U·¹®i°¾³µª—®â§Ñ´U»aƨP·¹¯¿]¨P®i··³µª—¿|¯À—® ­a´]·¹¯[§ê»1®ië-¨—®iª-¯[¯J¶ÅG®â´U§ ª—´]­a³µªP¬U°!®²Äº Å»®i··³¼´]ª³µª¯À—®&ØÙØGÙ^Û^ÜÝ ±²´U»Å¨P·iËìT´ ´]¨P»„·1¨—»ÅP»1³¾·¹®Œ´]¨—»„»®i·¨P°µ¯·T³µª«P³µ±‰¬r¯¹®x¯ÀP¬r¯ ­\¬UªE¶~´U§¯À—®±²®iª-¯¹»1¬U°-§ê®i¬r¯¨—»1®i·´U§ÅP»®‰ã³Âº ´]¨·°¼¶cÅP»´UÅG´]·¹®i«·¹®i°µ®i±²¯³¼´]ªÃ­ ´«—®i°µ·&«P³µ« ªP´U¯!³µ­aÅP»´%ãU®8¯À—®8ÅG®‰»§ê´U» ­a¬UªP±²®´U§¯À—® °µ®i¬r»1ª—®i«MªP´]­a³µªP¬U°Âº?®²ÄÅP»®i·1·³¼´]ªÈ¿U®iª—®‰» ¬vº ¯¹´U»%Ë y QRFTSUC]BŒDwmS]IJB^F ¦Ô§Ñ¨Pª«P¬U­ ®iª-¯¬U°T§©¨PªP±²¯³µ´]ª4´U§x¬UªE¶k¯¬U·¹¸-º?´U»1³¼®iª-¯¹®i« «P³µ¬U°µ´U¿]¨—®M·¹¶·¹¯¹®i­ ³¾·A¯À—®Þ¬rÁl³µ°µ³¼¯V¶Ô¯¹´Ô¿U®iª—®‰»1¬r¯¹® ª—´]­a³¾ªP¬U°Œ®²ÄÅP»®i··³µ´]ªP·í¯ÀP¬r¯8«—®i·±²»1³µÁ®\´UÁÆÇ®i±²¯·8³µª ¯À—®u¯¬U·¹¸6«—´]­a¬U³¾ªËU» ®²Ä¬U­aŰ¼®UΌ±²´]ªP·³µ«P®‰»¯À—® ®²Ä—±²®‰»ÅP¯8´U§&¬k¯¬U·¹¸Eº?´U» ³¼®iªE¯¹®i«Š«P³¾¬U°¼´U¿]¨—®\§Ñ»´]­á¯À—® ØGÙTØGÙTÛÜÝ6±²´U»Å¨P·€³µª¥î„³¼¿]¨—»1® çï©ð ³Œñg¨—¿U®iªP³¼´½®‰¯ 
¬U°s˼ÎEòråUåUåEó„ì&À—®±²´]ªEãU®‰»1·1¬UªE¯·³¾ª¯À³µ·„«P³µ¬U°¼´U¿]¨—®¬r»® ¬r¯¹¯¹®i­ ÅP¯³¾ª—¿u¯¹´[±²´]°µ°¾¬rÁ´U» ¬r¯³¼ãU®i°¼¶[±²´]ªP·¹¯¹»1¨±²¯!¬½·¹´rº °µ¨—¯³µ´]ªu§Ñ´U»§©¨—»1ªP³µ·1ÀP³µª—¿¬ ¯VÏg´\»´-´]­)ÀP´]¨P·¹®UËñ¬U± À ±²´]ª-ãU®‰»1·¬Uª-¯&·¹¯¬r»¯·¯À—®~¯¬U·¹¸uÏ&³¼¯À½¬ ·¹®‰¯´U§^§©¨—»1ªP³¼º ¯¨—»®x³¼¯¹®i­a·¯ÀP¬r¯„±‰¬Uª8Á®x¨P·¹®i«8³µªí¯ÀP®x·¹´]°¾¨—¯³¼´]ªËTÌͪ ¯À—®uÅP»´±²®i··í´U§2ª—®‰¿U´U¯³¾¬r¯³µª—¿Ã¯ÀP®u·¹´]°µ¨P¯³¼´]ªÎ^¯ÀP®‰¶ ôµõ—ö1÷sø?ùúö ûüsý ûÿþRø?ùÿýaø?ýR÷?ýRû$ö1û ÷ ¹ö aö 1÷  aþUý ùR÷?ùÿý ÷ %ù ö1ûÿý £þ  öøxû¹öÍüxþvüù ø ! "$# %ý£ûÿûúö1÷?ü%'&  ö)(+*-,/.0.12 354687 1 3:9;-<>= 1.0.( 35? %@„ý: ‰ý þ  öö- ~ý1ø  Í÷BAÂþ%÷ vù ø?þR÷  û/A ø^ø  öøC2ö1ø D  ÇüA ý ÷E ## Rý£ûÿûúö1÷?üGFH IJ-K ýL &  öBRýMAÂþR÷ Rù ø?þR÷ ^ûN5A øGø  ö1øODÇý£ü©ø?üCP-E ## %Q&R £þÇüsü ‰ý£þSD¹ö>vþ +T0U,V*-,/.0.12 3G467 1 3MW-9;-< % OXZY ö -%'&5[ ûÿû\vþ] ^T0U, 3G46_7 1 3>9;-<`= 1.a.( 3G? %b&  ö`( 63 , ,5c'dGU(e 3 ø  öø\&fD¹ö$>Rþ gAÂý1÷ME ## Rý ûúûúö÷?ü^ø  ö1øü  ý£þvûN û¹öŒþvü\ù ø  výý/ -% IJh öøxüsý þi %ü £ý‰ý j%  ýíö  ¹ö 8öi 'vþ kT0U,g*-,/.0.12 3546 ö )T0U, 63 , ,5cbdGU](e 3 %  &G[ ûÿûOvþ kT0U, 63 , ,5c 9<<'= 1.a.( 3 dGU(e 3 %>@8Çüsù !lý:m vûÍø F IJQI ý£þi Rü ý‰ý jLf Rýn ‰ý þbö‰øST0U, 63 , ,5codGU(e 3 ù ø   Rùvù !÷?ý‰ýpù ø  T0U,>1$T0U, 3 dGU(e 3G? 
Fq&8RþRøgT0U,g*-,/.0.12 3546 ù*ø  ûúùiù ÷?ý‰ýn%  !ø  M ÍüsùN ~ùÿüCDÇýgRûNVø % rI ý£þi Rü\ ý‰ý j%Osù øø   ]Çüsù nDÍýgvûÍø  ¼¿]¨—»1® çit ñŒÄ—±²®‰»ÅP¯*´U§¬½ØGÙØÙTÛÜÝk«P³¾¬U°¼´U¿]¨—®³µ°¼º °µ¨·¹¯¹»1¬r¯³µª—¿Ããv¬r»1³¾¬rÁ°¼®\·¹®i°µ®i±²¯³¼´]ª6´U§2¬r¯¹¯¹» ³¼Á¨—¯¹®i·§ê´U» ª—´]­\³µªP¬U°«—®i·±²» ³¼ÅP¯³¼´]ªP· ¿U®iª—®‰» ¬r¯¹®uª—´]­a³¾ªP¬U°„®²ÄÅ»®i··³¼´]ª· ï ·À—´%Ï*ªp³µªp³¼¯¬U°¼º ³µ±‰·£óx«—®i·±²»1³¼Ál³µª—¿8¯À—®³¼¯¹®i­a·2´U§T§©¨—»1ªP³µ¯¨—»®UË ñg¬U±1À‹§©¨—»1ªP³¼¯¨P»® ¯J¶ÅG®u³µª6¯À—®[ØGÙØGÙ^ÛÜ^݋¯¬U·¹¸ «—´]­\¬U³µªÃÀP¬U·*§Ñ´]¨—»*¬U··´±‰³µ¬r¯¹®i«¥¬r¯¹¯¹»1³µÁ¨—¯¹®i· t ±²´]°¼´U»iÎ ÅP» ³µ±²®UÎE´RÏ&ª—®‰»¬UªP«\됨P¬Uª-¯³¼¯J¶U˦Ȫ—´]­\³µªP¬U°—®²ÄÅP»®i·Çº ·³µ´]ªí¿U®iª—®‰»1¬r¯¹´U»Œ­8¨P·¹¯«—®i±‰³µ«—®Ï*ÀP³µ±1À´U§—¯À—®i·¹®g§ê´]¨—» ¬r¯¹¯¹»1³µÁ¨—¯¹®i·x¯¹´ ³µªP±‰°µ¨P«P®*³µªu¯À—®~¿U®iª—®‰»1¬r¯¹®i«½®²ÄÅP»®i·Çº ·³µ´]ªËpU» ®²Ä—¬U­ Ű¼®UίÀP®c¯¬U·¸Ð«—´]­a¬U³¾ª6´UÁÆ¹®i±²¯· ¨Pª«—®‰»«P³¾·±‰¨P··³µ´]ªâ³µªp¯À—®½«P³µ¬U°µ´U¿]¨—®\³µªÐî³µ¿]¨—»® ç ¬r»®a¬vu ç ärå[¶U®i°µ°¼´RÏ »1¨—¿[´RÏ&ª—®i«¥Á¶xw~¬r»1»®‰¯¹¯ ï wíó ¬UªP«½¬yu ç åUå «—´]°µ°µ¬r»x¿U»®‰®iª½±1À¬U³¼»´%Ï&ª—®i«cÁ¶^z¯¹®‰ãU® ï zPó ˌÌǪ‹¯À—®[«P³µ¬U°¼´U¿]¨P®c®²Ä—±²®‰»ÅP¯ ³µªŠî„³¼¿]¨—»1® ç ¯À—® ¶U®i°µ°µ´%Ï »1¨P¿p³µ· «—®i·±²»1³µÁ®i«|{P» ·¹¯ ¬U·~}€J-‚ƒ‚…„i†ˆ‡‰JŠ ‹ „i‡!ŒJjŽ~J„i‚‚…}i‡$‘€¬UªP«k¯À—®iªâ·¨PÁ·¹®i됨—®iªE¯°µ¶Ã¬U·y’“” j‚‚…„i†•‡‰JŠ ‹ „i‡€ŒJjŽ–J„r‚ƒ‚…}i‡‘$—˜’“”€‡‰JŠ ‹ „r‡€ŒJjŽ J„r‚ƒ‚…}i‡‘$—™’“”šj‚‚…„i†›‡‰JŠUËÉÌV¯Ã±²´]¨°µ«¬U°µ·¹´.ÀP¬iãU® ÁG®‰®iªŠ«—®i·±²»1³µÁ®i«6Á-¶Ð¬UªE¶Ð´U§2¯À—®½§ê´]°¾°¼´%Ï&³¾ª—¿4ª—´]ª—º ÅP»1´]ª—´]­a³µªP¬U°G®²ÄÅP»®i·1·³¼´]ªP· t ’“R'‡‰JŠi—œ`‡‰JŠr—Vœb j‚‚a„r†ž‡‰JŠr—ZœbyŸjŒJjŽkj‚‚a„r†ž‡‰JŠr—8’“”ŸjŒJjŽyj‚0  ‚…„i†¡‡‰JŠi—+’“”ŸjŒJ¢Žv‡‰JŠ]˽ì&À—®\±²´]ª-¯¹®iªE¯8´U§¯À—®i·¹® «—®i·±²» ³¼ÅP¯³¼´]ªP·kãr¬r»1³¼®i·p«P®‰Å®iª«P³µª—¿=´]ª(Ï&ÀP³¾±1À(¬r¯Çº ¯¹»1³¼Ál¨—¯¹®i·¬r»®³µªP±‰°¾¨P«—®i«c³µªc¯À—®í«—®i·±²»1³µÅP¯³¼´]ªË8£&´RÏ «—´®i·a¯À—®4·¹ÅG®i¬r¸U®‰»u«—®i±‰³µ«P®Ï&ÀP³µ± À ¬r¯¹¯¹»1³¼Ál¨—¯¹®i· ¯¹´ ³µªP±‰°¾¨P«—®¤ ì2ÀP®8Å»´UÁ°¼®i­É´U§g±²´]ª-¯¹®iªE¯·¹®i°¼®i±²¯³¼´]ª¥§Ñ´U»íª—´]­ º ³µªP¬U°g®²ÄÅP»®i··³µ´]ªP·À¬U· 
Á®‰®iª‹¯À—®½§Ñ´±‰¨·´U§!­¨±1À ÅP»®‰ã³¼´]¨P·*Ïx´U»¸Ã¬UªP«¥¬½°µ¬r»¿U® ª¨P­ÁG®‰»!´U§­ ´«—®i°µ· ÀP¬%ãU®xÁG®‰®iªÅP»´UÅG´]·¹®i« ï ¥ °µ¬r»¸í¬Uª«)¦Z³µ°¼¸U®i·Çº/w€³¼ÁPÁ·‰Î ç Ör§ èh¨R© »®iªPªP¬Uªa¬Uª« ¥ °µ¬r»¸Î ç ÖUÖ èh¨—ð ¬U°¼®&¬UªP«uß2®²º ³¼¯¹®‰»iÎ ç ÖUÖUä ¨«ª ¬U··¹´]ªPªP®i¬U¨Î ç ÖUÖUä ¨n¬ ´U»1«P¬UªÎgòråUåUåEó ­® ’G‡'}r‚ ­ }rËì2ÀP®€§Ñ¬U±²¯¹´U»1·&¯ÀP¬r¯2¯À—®i·®­ ´«—®i°µ·¨—¯³Âº °µ³…¯‰®â³µª±‰°µ¨P«—®4¯À—®â«P³µ·±²´]¨—» ·¹®k·¯¹»1¨P±²¯¨—»®UÎ!¯À—®â¬r¯Çº ¯¹»1³¼Ál¨—¯¹®i·¨P·®i«u³µª\¯À—®~°µ¬U·¹¯­ ®iªE¯³µ´]ªÎ-¯ÀP®&»®i±²®iªP±²¶ ´U§G°µ¬U·¹¯­ ®iª-¯³¼´]ªÎ]¯ÀP®§ê»®i됨—®iªP±²¶8´U§G­ ®iªE¯³µ´]ªÎ]¯À—® ¯¬U·¹¸ ·¹¯¹» ¨P±²¯¨—»®UίÀ—®k³µª—§Ñ®‰»®iª-¯³µ¬U°±²´]­ Ű¼®²Ä—³¼¯V¶A´U§ ¯À—®€¯¬U·¹¸Î¬UªP«½Ï¬i¶·2´U§„«—®‰¯¹®‰»1­\³µªP³µª—¿ ·¬U°µ³µ®iªE¯´UÁ—º ƹ®i±²¯·¬UªP«6¯À—®\·¬U°¾³¼®iªE¯¬r¯¹¯¹»1³¼Á¨—¯¹®i·´U§2¬Uªp´UÁÆ¹®i±²¯‰Ë ÌǪ¯À³µ·Å¬rÅG®‰»Ïx®2¨—¯³µ°¾³…¯‰®g¬~·¹®‰¯Œ´U§l§Ñ¬U±²¯¹´U»1·±²´]ªP·³µ«º ®‰»®i«Ð¬U·³µ­aÅ´U»1¯¬UªE¯í§Ñ´U»¯ÀP»®‰®\´U§¯À—®i·®u­ ´«—®i°µ·‰Î ¬UªP«‹®i­ ųµ»1³µ±‰¬U°µ°¼¶¥±²´]­ Ŭr»1®c¯ÀP®c¨—¯³¾°µ³¼¯J¶â´U§*¯À—®i·¹® §©¬U±²¯¹´U»1·Œ¬U·„ÅP»®i«P³¾±²¯¹´U»1·„³µª¬~­a¬U± ÀP³µª—®°¼®i¬r»1ª³µª—¿&®²Äº ÅG®‰»1³µ­ ®iª-¯‰ËŒì2À—®€§©¬U±²¯¹´U»!·®‰¯·Ïg®¨—¯³µ°µ³…¯‰®€¬r»® t ° ØGÙTÛݱ²³iÝ ³ ´Ý §Ñ¬U±²¯¹´U»1·iÎ ³¾ªP·¹Å³¼»1®i« Á¶ ¯À—® µÑÛ^Øf±\´¶S´ÛÝf²B· ¶~ÙB¸´· ´U§ ð ¬U°¼®¬UªP«[ß2®i³¼¯¹®‰» ïÇç ÖUÖUä-ó ¨ ° ØGÙTÛ^Øf´¹PÝGÜ\²B·º¹i²ØÝ §©¬U±²¯¹´U»1·‰Î³µª·¹Å³¼»®i«ÐÁ¶ ¯ÀP® ­ ´«P®i°µ·N´U§ ¥ °¾¬r»¸ ¬UªP« ±²´]°¾°¼®i¬r¿]¨—®i· ï ¥ °µ¬r»¸ ¬UªP«k¦Z³µ°¼¸U®i·¹º/w~³¼ÁPÁ·iÎ ç Ör§ èh¨© »®iªPªP¬Uª ¬Uª« ¥ °µ¬r»¸Î ç ÖUÖ è ó ¨ ° µÑÛÝ´PÛÝfµÑÙTÛ\²B·›µêÛB»h·-ÜC´PÛ^Øf´Q³§©¬U±²¯¹´U»1·‰Îp³µªº ·Å³¼»®i«cÁ¶c¯ÀP®í­ ´«P®i°G´U§ ¬ ´U»1«P¬Uª ï òråUåUå—ó Ë ð ¬U°¼®¬UªP« ß2®i³¼¯¹®‰»]¼ÿ·ZµÑÛ^Øf±\´¶>´PÛÝ¢²B·¶~ÙB¸´·*§ê´rº ±‰¨P·¹®i·´]ªâ¯À—®aÅ»´«¨P±²¯³¼´]ª¥´U§ª—®i¬r»¹ºJ­a³¾ªP³µ­a¬U°„«—®²º ·±²»1³µÅP¯³¼´]ªP·Ã¯ÀP¬r¯â¬U°µ°¼´%Ï ¯À—®ŠÀ—®i¬r»®‰»¥¯¹´Z»®i°µ³¾¬rÁ°¼¶ «P³µ·¯³µª—¿]¨P³µ·1À¯À—® ¯¬U·¹¸k´UÁÆ¹®i±²¯€§ê»1´]­·³µ­a³¾°µ¬r»&¯¬U·¹¸ ´UÁÆ¹®i±²¯·‰Ë.îP´]°µ°¼´%Ï*³µª—¿šw!»´]·¯¬UªP«™z³µ«ª—®‰» ïÇç Ör§ è ó Î ð ¬U°¼®~¬UªP«uß&®i³¼¯¹®‰» 
¼ÿ·x¬U°¼¿U´U»1³¼¯À­Ô¨—¯³¾°µ³…¯‰®i·«P³¾·±²´]¨—»1·¹® ·¹¯¹»1¨±²¯¨—»®*¬U·¬Uªc³µ­ ÅG´U»¯¬Uª-¯§©¬U±²¯¹´U»2³µª\«P®‰¯¹®‰»1­a³µªº ³µª—¿ÈÏ*ÀP³µ±1À ´UÁÆÇ®i±²¯·Ð¯À—®=±‰¨—»»®iª-¯p´UÁÆ¹®i±²¯Ð­8¨P·¹¯ ÁG®u«P³¾·¹¯³µª—¿]¨P³¾·À—®i«¥§Ñ»´]­[Ë¥ì2À—®u­a´«—®i°g´U§ ¥ °¾¬r»¸lÎ © »®iªPªP¬UªM¬UªP«½¦=³¾°¼¸U®i·Çº/w~³¼ÁÁ·[³µ·[Á¬U·®i«È´]ªÈ¯À—® ª—´U¯³¼´]ª6´U§*ØGÙ^Û^Øf´¹ÝÜC²B·q¹i²ØÝ³UÎT³sË ®U˽¯À—®c±²´]ªº ãU®‰»1·¬Uª-¯·8¬r¯¹¯¹®i­ ÅP¯í¯¹´k±²´-´U»1«³µªP¬r¯¹® Ï&³¼¯Àp´]ª—®\¬Uªº ´U¯À—®‰»4Á-¶=®i·¹¯¬rÁ°µ³¾·ÀP³µª—¿Š¬ ±²´]ªP±²®‰Å¯¨P¬U°€Å¬U±²¯§Ñ´U» «—®i·±²» ³¼Á³µª—¿u¬Uª4´UÁ—ÆÇ®i±²¯‰Ë ¬ ´U»1«P¬Uª\¼ÿ·nµÑÛÝ´PÛÝfµÑÙTÛ\²B· µÑÛC»”·EÜC´PÛTØO´O³\­a´«—®i°³µ·8Á¬U·®i«Š´]ªŠ¯À—®[¬U··¨P­ ÅPº ¯³¼´]ª$¯ÀP¬r¯¥¯À—®A¨Pª«—®‰»1°¼¶³µª—¿A¯¬U·¹¸-º?»®i°µ¬r¯¹®i«(³¾ª—§ê®‰»º ®iªP±²®i·\»®i됨P³¼»®i«‹¯¹´6¬U± ÀP³¼®‰ãU®Ã¯ÀP®[¯¬U·¹¸Š¿U´]¬U°µ·\¬r»® ¬Uª=³µ­ ÅG´U»¯¬Uª-¯a§©¬U±²¯¹´U»c³µª|±²´]ªE¯¹®iª-¯[·¹®i°¼®i±²¯³¼´]ª|§ê´U» ª—´]ª—ºJ­a³µªP³µ­\¬U°«—®i·±²»1³µÅP¯³¼´]ªP·‰Ëg¦â® «—®i·1±²»1³¼ÁG®í¯À—®i·¹® ­ ´«—®i°µ·³¾ª[­ ´U»®«—®‰¯¬U³¾°GÁ®i°µ´%ÏíË ¦â® ±²´]­aŬr»®Š¯À—®‹ÅP»®i«P³µ±²¯³µãU®ÐÅG´%Ïx®‰»¥´U§\¯À—® §©¬U±²¯¹´U»1·~¨—¯³µ°µ³¾¯‰®i«³µª[¯À—®i·®8­ ´«—®i°µ·2Á-¶¨P·1³µª—¿\­a¬vº ± ÀP³µª—®â°¼®i¬r»1ª³µª—¿6¯¹´A¯¹» ¬U³µªZ¬UªP«=¯¹®i·¹¯¬Aª—´]­a³µª¬U°Âº ®²ÄÅP»®i··1³¼´]ªÈ¿U®iª—®‰»1¬r¯¹´U»4´]ªM¬ ·®‰¯´U§aÕUÖUÕ ª—´]­a³¼º ªP¬U°«—®i·±²» ³¼ÅP¯³¼´]ªP·§ê»1´]­Ô¯ÀP®!±²´U»Ål¨P·x´U§ØGÙTØGÙTÛÜÝ «P³¾¬U°¼´U¿]¨—®i·‰Ë¿¦â®âÅ»´%㝳µ«—®k¯À—®â­a¬U±1À³µª—®â°¼®i¬r»1ª—®‰» Ï&³µ¯À«P³¾·¹¯³µªP±²¯„·¹®‰¯·„´U§P§Ñ®i¬r¯¨—»®i·Œ­ ´U¯³µãv¬r¯¹®i« Á¶¯À—® ­ ´«—®i°µ·[¬rÁG´RãU®UÎí³µªÈ¬U«P«P³¼¯³µ´]ªZ¯¹´ «P³µ·±²´]¨—» ·¹®¥§ê®i¬vº ¯¨—»1®i·^»®‰Å»®i·¹®iª-¯³µª—¿2¿]³¼ãU®iª—ºJª—®‰Ï‹«³µ·¹¯³µªP±²¯³µ´]ªP·‰Î%¬UªP« «P³¾¬U°¼´U¿]¨—®·¹ÅG®i±‰³…{l±~§ê®i¬r¯¨—»1®i·&·¨P± ÀìU·*¯À—®·¹ÅG®i¬r¸U®‰» ´U§2¯ÀP®uª—´]­a³¾ªP¬U°®²ÄÅP»®i··1³¼´]ª΄³¼¯·¬rÁl·¹´]°µ¨—¯¹®c°¼´±‰¬vº ¯³¼´]ª|³µªŠ¯À—®Ã«P³¾·±²´]¨—»1·¹®UÎx¬UªP«Š¯À—®Å»´UÁ°¼®i­ ¯ÀP¬r¯ ¯À—®±²´]ªEãU®‰»1·1¬UªE¯·!¬r»®±‰¨—»1»®iªE¯°µ¶\¯¹»¶³µª—¿ ¯¹´u·´]°¼ãU®UË ¦â®|®‰ãv¬U°µ¨¬r¯¹® ¯À—® ª—´]­a³¾ªP¬U°Âº?®²ÄÅP»®i··³µ´]ªM¿U®iªº ®‰»1¬r¯¹´U»kÁ¶=±²´]­ Ål¬r»1³µª—¿ 
³¼¯·ÅP»®i«P³µ±²¯³µ´]ªP·[¬r¿]¬U³µªP·¹¯ Ï&À¬r¯$À¨P­a¬UªP·Ê·¬U³¾«¬r¯(¯À—® ·¬U­ ® ÅG´]³µª-¯$³¾ª ¯À—®È«P³¾¬U°¼´U¿]¨—®UË ¦â®ÈÅP»1´%㝳µ«—®Z¬Þ» ³¼¿U´U»´]¨P·‹¯¹®i·¹¯ ´U§!¯À—®[ª—´]­a³µª¬U°Âº?®²ÄÅ»®i··³¼´]ªÐ¿U®iª—®‰»1¬r¯¹´U»\Á-¶Ð´]ªP°¼¶ ±²´]¨Pª-¯³µª—¿í¬U·g±²´U»»®i±²¯¯ÀP´]·¹®&ª—´]­a³¾ªP¬U°®²ÄÅ»®i··³¼´]ª· Ï&À³µ±1ÀA®²Ä—¬U±²¯°¼¶ ­a¬r¯± À ¯À—®4±²´]ªE¯¹®iª-¯u´U§~¯À—®4À-¨º ­a¬Uª8¿U®iª—®‰» ¬r¯¹®i« ª—´]­a³µªP¬U°E®²ÄÅ»®i··³¼´]ª·‰ËaÀ'¦â®¬U°µ·¹´ 됨P¬Uª-¯³¼§ê¶u¯À—®í±²´]ª-¯¹»1³¼Á¨—¯³µ´]ªP·x´U§T®i¬U± À§Ñ®i¬r¯¨—»®·¹®‰¯ ¯¹´ ¯À—®*Å®‰»1§ê´U»1­\¬UªP±²®&´U§¯ÀP®!ª—´]­\³µªP¬U°Âº?®²ÄÅP»®i··1³¼´]ª ¿U®iª—®‰» ¬r¯¹´U»iËÂÁ~¨P»[»®i·¨P°¼¯·[·1À—´%ϯÀ¬r¯Ãª—´]­a³µª¬U°Âº ®²ÄÅP»®i··1³¼´]ª8¿U®iª—®‰»1¬r¯¹´U»1·Á¬U·¹®i« ´]ª ¬€±²´]­Á³µªP¬r¯³¼´]ª ´U§¯À—®¿]³µãU®iªºJª—®‰ÏíÎr¬UªP««³µ¬U°¼´U¿]¨—®·¹ÅG®i±‰³…{l±¬Uª«)µêÛÚ Ý´PÛÝfµ©Ù^Û\²·kµêÛB»h·-ÜC´PÛ^Øf´Q³§ê®i¬r¯¨P»®i·±‰¬Uªc¬U± ÀP³¼®‰ãU® äråEæ ¬U±‰±‰¨—» ¬U±²¶N¬r¯­a¬r¯± ÀP³µª—¿ À¨P­a¬UªNÅG®‰»§ê´U»º ­a¬Uª±²®UÎu¬·³µ¿]ªP³…{±‰¬Uª-¯p³µ­ ÅP»1´%ãU®i­ ®iª-¯6´RãU®‰»Ð¯À—® ­a¬Rƹ´U»1³¼¯V¶A±‰°µ¬U·· Á¬U·®i°µ³µª—®½´U§ çiè æe³µªŠÏ&À³µ±1ÀНÀ—® ¿U®iª—®‰» ¬r¯¹´U»k·³¾­ Ű¼¶ ¿]¨P®i··¹®i·¯ÀP®p­ ´]·¹¯§Ñ»®i됨—®iªE¯ ÅP»1´UÅ®‰»1¯J¶±²´]­Á³µªP¬r¯³¼´]ªËÌǪެU««P³¼¯³¼´]ª΀¯¹´Z´]¨—» ·¨P»ÅP»1³µ·®UίÀ—®4»1®i·¨P°¼¯·c³µªP«P³¾±‰¬r¯¹®4¯ÀP¬r¯½¯À—®6ØGÙTÛGÚ Øf´¹PÝGÜ\²·Ã¹i²ØÝ.§Ñ®i¬r¯¨—»®i·\¬Uª«A¯À—®4ØGÙ^Û^Ýf±²Z³iÝ ³]´Pݽ§Ñ®i¬r¯¨—»®i·­a¬r¸U®€ª—´ ·³¼¿]ªP³¾{±‰¬UªE¯±²´]ª-¯¹»1³¼Á¨—¯³µ´]ª ¯¹´aÅG®‰»§Ñ´U»1­a¬Uª±²®UË ÄÆÅ  ùÿûN~ø  ùÿü2ö%÷?ý²ö$D  ùÿügDÇý‰øs÷?ý$Í÷?üsùúö û…L\+rÇûÿù ø  öø  þ2ö$grÍ÷ÇAÂý1÷ 2öD/ùÿüDÇþ%÷s÷ ‰ø?û gø  ^ýRû ÷ Çö üsý]m öRûNxü©øJöi Rö1÷G ö £ö ùvü©ø  ùD  f:D¹ö$' ö ûÿþvö1ø :vö1ø?þR÷Jö1û ûúö þrö$ ) Í÷Jö1ø?ý1÷?ü8ô X rÍ÷?ûúö Í÷LMEÈÈÉ5% K ý1ø ~ø  öø ø  ý ÷ „öøsøs÷?ùvþRø Íüö %ùúüÇDÍý£þR÷?üÇM/iø?ù øƒ  ö1üL%ø    ö1÷G V÷ ù øxùÿügø?ýíö$D  ù&öb5Êvö$DÍø2ö1ø D  ø?ýíö  þ2ö` ÇüÇDV÷?ùN]m ø?ùÿý¢L%ù…% %AÂý1÷ý£þ%÷\R÷?ývûÐø  ZRýùrö1ûmË/Ê]%÷ Çüsüsùÿý_ ]m Í÷Jö1ø?ý1÷+xþRü©ø>DÍý ÷s÷ DÍø?ûÌ ˜D  ý‰ý üÇ öý ÍEÎkUý£üsüsùRùúûÿù ø?ùÇü ÷ R÷ Íüljø  S- 2ø  8UýfÍ÷üÇVøTý$Aø  AÂý£þ%÷Tö1øsøs÷?ùvþ%ø Çü% 
z®i±²¯³µ´]ª=òЫ—®i·±²»1³µÁ®i·a¯À—®âØGÙØGÙ^ÛÜ^ÝZ±²´U»Å¨P·‰Î ¯À—®½®iªP±²´«P³µª—¿k´U§&¯ÀP®½±²´U»1ŨP· ¬UªP«Š¯À—®½§Ñ®i¬r¯¨—»®i· ¨P·¹®i«½³µªc­a¬U± ÀP³µª—®€°¼®i¬r»1ª³µª—¿³¾ªu­ ´U»1®~«P®‰¯¬U³µ°sËz®i±£º ¯³¼´]ª8ÕÅP»®i·®iªE¯·T¯À—®g됨P¬Uª-¯³¼¯¬r¯³¼ãU®g»®i·¨P°¼¯·^´U§—¯¹®i·¹¯Çº ³µª—¿u¯À—® °¼®i¬r»1ª—®i«4»1¨P°µ®i·*¬r¿]¬U³µª·¹¯*¯ÀP® ±²´U»1ŨP·‰ÎG«P³µ·Çº ±‰¨P··®i·¯À—®u§ê®i¬r¯¨P»®i·8¯ÀP¬r¯¯ÀP®u­a¬U± ÀP³µª—®c°¼®i¬r»1ª—®‰» ³µ«—®iª-¯³…{P®i·c¬U·½³µ­ ÅG´U»¯¬Uª-¯‰Î&¬UªP«.ÅP»´R㐳µ«P®i·\®²Ä—¬U­ º Ű¼®i·´U§¯À—®\» ¨P°¼®i·€¯ÀP¬r¯ ¬r»®c°¼®i¬r»1ª—®i«˙z®i±²¯³µ´]ª€Ï ·¨P­\­a¬r»1³…¯‰®i· ´]¨—»u»®i·¨P°µ¯· ¬UªP« «P³µ·1±‰¨P··¹®i· §©¨—¯¨—»® Ïx´U»¸lË ‡ ÐuBCJÑxwY}gÒâ:Sr:„}H)<—S]nB„DgY Á~¨—»$®²ÄÅ®‰» ³µ­ ®iª-¯·$¨P¯³µ°µ³…¯‰®Ô¯À—®Ô» ¨P°¼® °¼®i¬r»1ªP³µªP¿ ÅP»´U¿U» ¬U­ ±µƒ¹Q¹´± ï ¥ ´]ÀP®iªÎ ç ÖUÖ è ó6¯¹´Þ°µ®i¬r»1ªÒ¬ ª—´]­a³¾ªP¬U°Âº?®²ÄÅP»®i··³µ´]ªu¿U®iªP®‰»1¬r¯¹´U»*§Ñ»´]­)¯ÀP®ª—´]­a³Âº ªP¬U°„®²ÄÅP»®i··1³¼´]ªP·€³µªk¯À—®uØGÙØÙTÛÜÝб²´U»Å¨·‰Ë8¦!°Âº ¯À—´]¨—¿]À=Ïx®pÀP¬U«Z·¹®‰ãU®‰»1¬U°€°¼®i¬r» ª—®‰»1·c¬iãr¬U³µ°µ¬rÁ°¼®k¯¹´ ¨P·‰ÎÏx®!± À—´]·¹®±µƒ¹¹Q´±[ÅP» ³µ­a¬r»1³µ°µ¶ÁG®i±‰¬U¨P·¹®!¯À—®~³¼§¾º ¯À—®iª»1¨P°¼®i·^¯ÀP¬r¯„¬r»®x¨·¹®i«¯¹´!®²ÄÅP»®i··T¯À—®°¼®i¬r»1ª—®i« ª—´]­a³¾ªP¬U°í¿U®iª—®‰» ¬r¯¹´U»6­ ´«—®i°¬r»1®‹®i¬U·¶M§Ñ´U»kÅG®‰´rº Ű¼®&¯¹´8¨Pª«—®‰»1·¹¯¬UªP«u¬UªP«u¯À-¨P·§Ñ¬U±‰³¾°µ³¼¯¬r¯¹®~±²´]­ Ŭr»¹º ³µ·¹´]ª6Ï&³¼¯ÀЯÀ—®\¯ÀP®‰´U»®‰¯³µ±‰¬U°­ ´«—®i°µ·íÏx®½¬r»®u¯¹»1¶Eº ³µª—¿*¯¹´*®‰ãr¬U°µ¨P¬r¯¹®UË8Ó^³µ¸U®g´U¯À—®‰»„°¼®i¬r»1ª³µª—¿2ÅP»1´U¿U»1¬U­a·‰Î ±µƒ¹Q¹´±Š¯¬r¸U®i· ¬U·³¾ª—Ũ—¯€¯À—®cªP¬U­ ®i·´U§2¬Ã·¹®‰¯´U§ Ô ‚…}‘‘-‘¯¹´~Á®°¼®i¬r»1ªP®i«ÎU¯À—®ªP¬U­ ®i·Œ¬UªP«8»1¬UªP¿U®i·„´U§ ãr¬U°µ¨—®i·!´U§x¬^{—ĝ®i«¥·¹®‰¯€´U§ ‹ }J’Ɖh‡‘£ÎT¬UªP«º’Ƈ} ­®­ƒ® Š J}r’ }x·¹ÅG®i±‰³¼§Ñ¶³µª—¿¯À—®g±‰°¾¬U··^¬UªP«§ê®i¬r¯¨P»®ãr¬U°µ¨—®i·§Ñ´U» ®i¬U±1Àp®²Ä¬U­ Ål°¼®8³¾ªk¬c¯¹»1¬U³¾ªP³µª—¿c·¹®‰¯‰ËÌͯ·~´]¨—¯¹Å¨—¯~³µ· ¬ Ô ‚…}‘$‘ ­ Õ Ô }r’ ­ „ ® œk„ ¢‚§Ñ´U»!ÅP»®i«P³µ±²¯³¾ª—¿a¯À—®8±‰°µ¬U·· ´U§~§Ñ¨P¯¨—»®[®²Ä—¬U­ Ű¼®i·‰Ë|Ìͪ–±µƒ¹¹´Q±Î¯À—®Ã±‰°¾¬U··³…{—º ±‰¬r¯³¼´]ª.­ ´«P®i°2³µ·a°¼®i¬r» ª—®i«|¨P·1³µª—¿¥¿U»®‰®i«—¶A·¹®i¬r»1± À 
¿]¨P³µ«P®i«[Á-¶¬Uªk³µª—§Ñ´U»1­a¬r¯³µ´]ª[¿]¬U³µªÃ­a®‰¯¹»1³µ±rÎl¬Uª«Ã³µ· ®²ÄÅP»®i··¹®i«Ã¬U·2¬Uª´U»1«P®‰»®i«[·¹®‰¯2´U§Œ³¼§êº?¯À—®iª½»1¨P°µ®i·‰Ë ì2À¨P·Œ¯¹´ ¬rÅP۵¶b±µ¹¹´±Î-¯À—®&ª—´]­a³¾ªP¬U°—®²ÄÅP»®i·Çº ·³¼´]ª·x³¾ª\¯À—®€±²´U»Ål¨P·­¨P·¯xÁG®!®iªP±²´«—®i«[³µªu¯¹®‰»1­a· ´U§&¬k·¹®‰¯8´U§&±‰°µ¬U··¹®i· ï ¯À—®½´]¨—¯¹Å¨P¯±‰°µ¬U··1³…{±‰¬r¯³¼´]ªló ¬UªP« ¬$·¹®‰¯Š´U§4³µª—Ål¨—¯p§Ñ®i¬r¯¨—»®i·‹¯ÀP¬r¯A¬r»®È¨P·¹®i« ¬U·uÅP»®i«P³µ±²¯¹´U» · §ê´U»u¯À—®4±‰°¾¬U··¹®i·‰ËZ¦*·c­ ®iª-¯³¼´]ª—®i« ¬rÁG´%ãU®UΌÏx®c¬r»1®\¯¹»¶³µª—¿4¯¹´k°¼®i¬r» ªpÏ&ÀP³¾±1Àp´U§2¬k·¹®‰¯ ´U§±²´]ª-¯¹®iªE¯í¬r¯¹¯¹» ³¼Á¨—¯¹®i·!·À—´]¨P°¾«ÁG®³µª±‰°µ¨P«—®i«³¾ª4¬ ª—´]­a³¾ªP¬U°„®²ÄÅ»®i··³¼´]ªËaì2À—® §Ñ®i¬r¯¨—»®i·íÏx®a®iªP±²´«—® §Ñ´U»x®i¬U± À[ª—´]­a³µª¬U°P®²ÄÅP»®i··³µ´]ªu¬r»®€­ ´U¯³µãv¬r¯¹®i«cÁ¶ §©¬U±²¯¹´U»1·&±‰°µ¬U³¾­ ®i«[³µª½¯À—®°¾³¼¯¹®‰»1¬r¯¨—»®~¯¹´aÁG®í³µ­aÅ´U»º ¯¬Uª-¯cÅ»®i«P³µ±²¯¹´U»1·c´U§í¯ÀP®¥±²´]ª-¯¹®iª-¯[´U§¬Šª—´]­a³µªP¬U° ®²ÄÅP»®i··³µ´]ªË © ®i°¼´RÏNÏx®¥«—®i·±²»1³¼ÁG®´]¨P»c±²´U»Ål¨P·\´U§ª—´]­a³µªP¬U° ®²ÄÅP»®i··³µ´]ªP·‰Î¯À—®!¬U·1·³¼¿]ªP­ ®iª-¯´U§^±‰°µ¬U··®i·¯¹´8®i¬U± À ª—´]­a³¾ªP¬U°®²ÄÅP»1®i··³¼´]ªÎE¯À—®®²Ä¯¹» ¬U±²¯³¼´]ªa´U§G§Ñ®i¬r¯¨—»®i· §Ñ»´]­ ¯À—®â«P³µ¬U°¼´U¿]¨P®k³µª=Ï&À³µ±1À=®i¬U±1À®²ÄÅP»1®i··³¼´]ª ´±‰±‰¨—»1·‰ÎP¬Uª«[´]¨—»&°¼®i¬r» ªP³µª—¿ ®²ÄÅ®‰» ³µ­ ®iª-¯·‰Ë Ö\×ÆØ Ù'ÚQÛÜ8ÝBÞ ì2ÀP®*ØÙØGÙ^Û^ÜÝa±²´U»Ål¨P·³µ·Œ¬í·®‰¯´U§Gòϱ²´]­ Ũ—¯¹®‰»¹º ­ ®i«³µ¬r¯¹®i«(«P³µ¬U°¼´U¿]¨P®i·k±²´]ª·³µ·¹¯³µªP¿.´U§½¬=¯¹´U¯¬U° ´U§ çUç å]ò~¨—¯¹¯¹®‰» ¬UªP±²®i·‰ËŒì2À—®«P³µ¬U°¼´U¿]¨P®i·^Ïx®‰»®±²´]°µ°¼®i±²¯¹®i« ³µªâ¬Uªp®²ÄÅG®‰»1³µ­a®iªE¯€Ï&ÀP®‰»® ¯JÏx´ÃÀ¨P­a¬Uªp·¨PÁÆÇ®i±²¯· ±²´]°µ°¾¬rÁ´U» ¬r¯¹®i«4´]ª¥¬c·³µ­ Ål°¼®«—®i·³µ¿]ªÃ¯¬U·¹¸ÎG¯ÀP¬r¯~´U§ Á¨P¶³µªP¿!§©¨—»1ª³¼¯¨—»®§Ñ´U»¯JÏx´»1´-´]­a·´U§G¬À—´]¨P·¹® ï©ð ³ ñg¨—¿U®iªP³¼´â®‰¯c¬U°s˼Πç ÖUÖr§]ó ËȦ!ªA®²Ä—±²®‰»ÅP¯a´U§€¬ÐØGÙÚ ØGÙTÛÜ݋«P³µ¬U°¼´U¿]¨P®aÏx¬U·¿]³¼ãU®iª6³¾ªp¼¿]¨—»® ç Ë[ì2ÀP® Ŭr»1¯³µ±‰³¼Å¬Uª-¯·-¼í­a¬U³µªÊ¿U´]¬U°u³µ·â¯¹´MªP®‰¿U´U¯³µ¬r¯¹®|¯À—® ŨP»1±1ÀP¬U·®i· ¨ ¯À—®â³¼¯¹®i­a·c´U§À³¼¿]À—®i·¹¯cÅP»1³¼´U» ³¼¯J¶ ¬r»® ¬A·¹´U§©¬‹§ê´U»[¯ÀP®¥°¾³¼ã³¾ª—¿6»´´]­×¬UªP«Z¬Š¯¬rÁ°¼®â¬UªP« 
§Ñ´]¨—»~±1ÀP¬U³¼» ·&§Ñ´U»~¯À—®8«³µªP³µª—¿ »1´-´]­[Ë~ì2À—®Å¬r»¯³µ±‰³¼º ŬUª-¯·Œ¬U°µ·¹´~ÀP¬%ãU®·Å®i±‰³¾{±·¹®i±²´]ª«P¬r»¶¿U´]¬U°¾·TÏ*ÀP³µ±1À §©¨—»¯À—®‰»k±²´]ªP·¹¯¹»1¬U³¾ªM¯À—®ÐÅP»´UÁ°µ®i­ ·¹´]°¼ã³µª—¿A¯¬U·¸lË ª ¬r»¯³µ±‰³µÅ¬UªE¯·~¬r»® ³µª·¹¯¹»1¨P±²¯¹®i«k¯¹´½¯¹»¶¯¹´­ ®‰®‰¯€¬U· ­a¬Uª-¶´U§¯ÀP®i·¹® ¿U´]¬U°µ·~¬U·*ÅG´]··1³¼Á°¼®UάUª«4¬r»® ­ ´rº ¯³¼ãr¬r¯¹®i«A¯¹´p«—´â·¹´âÁ¶‹»®‰Ïx¬r» «P· ¬U··¹´±‰³µ¬r¯¹®i«AÏ&³µ¯À ·¬r¯³¾·{P®i«‹¿U´]¬U°µ·‰Ëpì2ÀP®c·¹®i±²´]ª«P¬r»¶6¿U´]¬U°¾· ¬r»® t4ç ó ­a¬r¯± Àâ±²´]°¼´U»1·~Ï&³µ¯ÀP³µª¬c»´´]­[Îò]ó&Á¨—¶Ã¬U·~­8¨P±1À §©¨—»1ªP³µ¯¨—»®g¬U·„¶U´]¨ ±‰¬UªÎ-Õ]ó·¹ÅG®iªP«8¬U°µ°¶U´]¨—»­ ´]ª—®‰¶UË ì2ÀP®\Ŭr»¯³¾±‰³¼Å¬Uª-¯·¬r»®c¯¹´]°µ«ÐÏ&ÀP³µ± À6»1®‰Ïx¬r»1«·¬r»® ¬U··´±‰³µ¬r¯¹®i«ÃÏ&³¼¯À[¬U± ÀP³¼®‰ã³µª—¿ ®i¬U±1À4¿U´]¬U°sË ñg¬U±1À|Ŭr»¯³µ±‰³¼Ål¬UªE¯a³µ· ¿]³µãU®iª|¬p·¹®‰Ål¬r»1¬r¯¹®Á¨P«—º ¿U®‰¯ ¬UªP«‹³µªEãU®iª-¯¹´U»¶â´U§§©¨—»1ªP³µ¯¨—»®UËoß&®i³¼¯ÀP®‰»Å¬r»¹º ¯³µ±‰³µÅ¬UªE¯ ¸ª—´%Ï*· Ï&ÀP¬r¯ ³µ· ³µª‹¯À—®[´U¯À—®‰» ¼ÿ· ³¾ªEãU®iªº ¯¹´U»¶â´U»8À—´%ÏÓ­¨±1À‹­ ´]ª—®‰¶¥¯À—®a´U¯À—®‰» ÀP¬U·‰Ë © ¶ ·À¬r»1³µª—¿½³µª—§Ñ´U»1­a¬r¯³¼´]ª¥«P¨P»1³µª—¿u¯À—®a±²´]ªEãU®‰» ·¬r¯³¼´]ªÎ ¯À—®‰¶Ê±‰¬Uª(±²´]­Á³µª—®Š¯À—®i³¼»¥Á¨«—¿U®‰¯·â¬UªP«(·¹®i°¼®i±²¯ §©¨—»1ªP³µ¯¨—»® §ê»1´]­®i¬U±1À‹´U¯À—®‰»]¼ÿ·³µª-ãU®iª-¯¹´U»1³¼®i·‰Ëì2ÀP® Ŭr»1¯³µ±‰³¼Å¬Uª-¯·¬r»1®c®i됨P¬U°µ·8¬UªP«‹Å¨—» ±1ÀP¬U·³¾ª—¿Ã«—®i±‰³Âº ·³µ´]ªP· ¬r»®ÆÇ´]³¾ªE¯‰Ë‹ÌͪНÀP®[®²ÄÅG®‰»1³µ­a®iªE¯‰Î®i¬U± À|·¹®‰¯ ´U§2Ål¬r»¯³µ±‰³¼Å¬Uª-¯·8·¹´]°¼ãU®i«‹´]ª—®c¯¹´k¯ÀP»®‰®c·1±²®iªP¬r»1³¼´]· Ï&³µ¯À.ãr¬r»¶³µª—¿A³µª-ãU®iªE¯¹´U» ³¼®i·[¬UªP«ZÁ¨P«P¿U®‰¯·‰ËÔì2ÀP® ÅP»1´UÁ°¼®i­·±²®iª¬r»1³¼´]·íãr¬r»1³¼®i«p¯¬U·¹¸â±²´]­ Ű¼®²Ä—³¼¯J¶¥Á¶ »1¬UªP¿]³µª—¿c§ê»´]­à¯¬U·¸·~Ï&ÀP®‰»®8³µ¯¹®i­a·€¬r»® ³µª—®²ÄÅG®iªº ·³µãU®2¬UªP« ¯À—®Á¨P«P¿U®‰¯„³¾·„»®i°µ¬r¯³¼ãU®i°¼¶8°µ¬r»¿U®¯¹´í¯¬U·¸· Ï&ÀP®‰»®¯À—®&³¼¯¹®i­a·Œ¬r»1®®²ÄÅG®iªP·³µãU®¬Uª« ¯À—®Á¨P«P¿U®‰¯ »®i°¾¬r¯³¼ãU®i°¼¶½·­a¬U°µ°sË ¦*§Ñ¯¹®‰»x¯ÀP®!±²´U»Ål¨P·ρ¬U·±²´]°µ°¼®i±²¯¹®i«½³¼¯gÏx¬U·¬UªPª—´rº ¯¬r¯¹®i«ÐÁ-¶pÀ¨P­a¬Uª6±²´«—®‰» ·í§Ñ´U»¯JÏx´k¯J¶Å®i·´U§§ê®i¬vº ¯¨—»1®i·‰Ë ì2À—®k¸CµÆ³%ØGÙ^ÜB±C³ ´º´PÛÝfµêÝOàá·j´Râ´·½¬UªPª—´rº 
¯¬r¯³¼´]ª·[ÅP»´R㐳µ«P®k«P³¾·±²´]¨—»1·¹®k»®‰§Ñ®‰»®iªP±²®p³µªP§ê´U»1­\¬vº ¯³¼´]ªc§Ñ»´]­ Ï&ÀP³¾±1Àc³µªP³¼¯³µ¬U°l»®‰ÅP»1®i·¹®iªE¯¬r¯³µ´]ªP·x´U§T«P³µ·¹º ±²´]¨—» ·¹®®iª-¯³¼¯³¼®i·\¬Uª«|¨—Å«P¬r¯¹®i· ¯¹´Ð¯À—®i­ ±‰¬Uª|ÁG® «—®‰» ³¼ãU®i«Î-¬Uª«a®²ÄÅ°µ³µ±‰³¼¯¬r¯¹¯¹» ³¼Á¨—¯¹®&¨·¬r¿U®*³µªP§ê´U»1­\¬vº ¯³¼´]ªZ¯À¬r¯[»®-ãP®i±²¯·À—´RÏ®i¬U±1ÀM«P³µ·±²´]¨—» ·¹®4®iª-¯³¼¯V¶ ρ¬U·a®‰ãU´U¸U®i«ËÈU»c®²Ä¬U­aŰ¼®UÎx¯À—®Ã³µªP³µ¯³µ¬U°x»®‰ÅP»®²º ·¹®iª-¯¬r¯³¼´]ª §Ñ´U»šäÇÌ ÀP¬%ãU®4¬â¶U®i°µ°µ´%Ï » ¨—¿—Ë Ìͯa±²´]·¹¯· u ç äråËå Ïx´]¨P°µ«È³µªP±‰°¾¨P«—®¥¯J¶ÅG®UÎíë-¨P¬Uª-¯³¼¯V¶Uα²´]°¼´U» ¬UªP«u´%Ï&ªP®‰»x§Ñ´]°µ°¼´RÏ&³µª—¿¯À—®>{P»1·¹¯x¨—¯¹¯¹®‰»1¬UªP±²®UËæÁ€ªP°¼¶ ¯À—®½ë-¨¬UªE¯³¼¯V¶6¬r¯¹¯¹»1³¼Ál¨—¯¹®c³¾·³µªP§ê®‰»»1®i«Ëæ*§ê¯¹®‰» ¯À—® ·¹®i±²´]ªP« ¨P¯¹¯¹®‰»1¬UªP±²®¯À—®®iª-¯³¼¯J¶8Ïg´]¨°µ«8Á®¨—Å«P¬r¯¹®i« ¯¹´³µª±‰°µ¨P«—®xÅP»1³µ±²®U˄ì2À—®&Ü^ÝÝ´±²TÛ^Øf´·¢´”â´·`²^ÛGÚ Û^ÙTÝj²PÝfµ©Ù^ÛB³±‰¬rÅP¯¨—»®2¯ÀP®ÅP»´UÁ°¼®i­(·´]°¼ã³¾ª—¿í·¹¯¬r¯¹® ³µª[¯¹®‰» ­a·&´U§„¿U´]¬U°µ·iα²´]ªP·¯¹»1¬U³µª-¯&±1ÀP¬UªP¿U®i·!¬Uª«¯À—® ·³…¯‰®4´U§€¯À—®¥·¹´]°µ¨—¯³¼´]ª|·¹®‰¯u§Ñ´U»c¯À—®k±‰¨P»»®iª-¯\±²´]ªº ·¹¯¹»1¬U³¾ªE¯ ®i됨P¬r¯³¼´]ªP·\¬U· Ïg®i°¾°¬U·a±‰¨—»»®iª-¯ ãr¬r»1³µ¬rÁ°¼® ¬U··³µ¿]ªP­ ®iª-¯·‰ËÉì2ÀP®6¨—¯¹¯¹®‰»1¬UªP±²®‹°¼®‰ãU®i°í«P³¾·±²´]¨—»1·¹® §Ñ®i¬r¯¨—»®i·[®iªP±²´«—®âÏ&ÀP®iªÈ¬UªZ´i箉»Ã³¾·[­a¬U«—®p¬UªP« ¯À—®€°¼®‰ãU®i°´U§^¬ ·¹ÅG®i¬r¸U®‰» ¼ÿ·±²´]­a­\³¼¯­ ®iª-¯g¯¹´a¬ÅP»´rº ÅG´]·¬U°^¨ªP«—®‰»&±²´]ªP·1³µ«—®‰»1¬r¯³¼´]ªÎ³sË ®U˱²´]ªP«P³¼¯³µ´]ªP¬U°´U» ¨PªP±²´]ª«P³¼¯³¼´]ªP¬U°?Ë ÌǪc´U»1«—®‰»¯¹´ «—®‰»1³¼ãU®~·¹´]­ ®~´U§^¯ÀP®~«³µ·±²´]¨—»1·®*³µªº §Ñ´U»1­a¬r¯³¼´]ªâ¯ÀP® ¯¬U·¹¸â·¹¯¹»1¨P±²¯¨—»1®a­¨P·¯€Á® ³µ«P®iªE¯³Âº {P®i«Ëcì2À—®cØGÙØGÙ^ÛÜ^Ý6±²´U»Ål¨P·€Ï¬U·®iªP±²´«—®i«â㝳µ¬ ¬‹·¹®‰¯u´U§í³¾ªP·¹¯¹»1¨P±²¯³µ´]ªP·a¯¹´Š±²´«P®‰»1·\¯¹´‹»®i±²´U»1«=¬U°µ° «—´]­a¬U³¾ª½¿U´]¬U°µ·‰Ë ¥ ÀP¬Uª—¿U®i·2¯¹´u¬a«P³açG®‰»®iª-¯2«—´]­a¬U³µª ¿U´]¬U°—´U»¬U±²¯³µ´]ª Ïg®‰»1®2¨P·¹®i« ¬U·¬€±‰¨—®¯¹´«—®‰»1³¼ãU®¯À—® ª—´]ªºJ°¾³µª—¿]¨P³µ·¯³µ±k¯¬U·¹¸M·¹¯¹»1¨±²¯¨—»® ï ìT®‰»¸U®iªÎ ç Ör§Uä ¨ w!»´]·¯k¬UªP«pz³µ«ª—®‰»iÎ ç Ör§ è ó ËÊñg¬U±1ÀZ«—´]­a¬U³¾ª|¬U±£º ¯³¼´]ªÃÅP»´%㝳µ«—®i·&¬a«P³µ·±²´]¨P»1·¹®í·¹®‰¿]­ 
ent purpose, so that a different domain action or set of domain actions defines a new segment. The encoded features all have good intercoder reliability (Di Eugenio et al., 1998; Jordan, 2000).

Our experimental data is 393 non-pronominal nominal descriptions from 13 dialogues of the COCONUT corpus, as well as features constructed from the annotations described above. We explain how we use the annotations to construct the features in more detail below.

2.1 Class Assignment

The corpus of nominal expressions is used to construct the machine learning classes as follows. We are trying to learn which subset of the four attributes, color, price, owner, quantity, should be included in a nominal expression. We encode each nominal expression in the corpus as a member of the category represented by the set of properties expressed by the nominal expression. This results in 16 classes representing the power set of the four attributes.

2.2 Feature Extraction

The corpus is used to construct the machine learning features as follows. In RIPPER, feature values are continuous (numeric), set-valued, or symbolic. We encoded each non-pronominal description in terms of a set of 58 features that were either directly annotated by humans as described above, derived from annotated features, or inherent to the dialogue (Di Eugenio et al., 1998; Jordan, 2000). The dialogue context in which each description occurs is represented in the encodings.

  what is mutually known: type-mk, color-mk, owner-mk, price-mk, quantity-mk
  reference-relation

Figure 2: Given-New Feature Set.

The GIVEN-NEW features in figure 2 encode fundamental attributes of the entity that is to be described by the nominal expression (Clark and Marshall, 1981; Prince, 1981). We encode what is mutually known about the discourse entity at the point at which it is to be described (type-mk, color-mk, owner-mk, price-mk, quantity-mk). We utilize a reference-relation feature to encode whether the entity is new (NEW), given (GIVEN) or discourse inferred (INFERENCE) relative to the discourse history. The types of inference supported by the annotation are set, subset, class and common noun anaphora (e.g. one and null anaphora) (Jordan, 2000).

The INHERENT features in figure 3 are a specific encoding of particulars about the discourse situation, such as the speaker, the task, and the entity's known attributes (type, color, owner, price, quantity). While we don't expect this feature set to generalize to other dialogue situations, it allows us to examine whether there are individual differences in attribute selection algorithms (speaker, speaker-pair), or whether specifics about the properties of the object, the location within the dialogue (utterance-number), and the problem difficulty (problem-number) may play significant roles in attribute selection.

  utterance-number, speaker-pair, speaker, problem-number
  attribute values: type, color, owner, price (range from $150 to $600), quantity (range from 0 to 4)

Figure 3: Inherent Feature Set: Task, Speaker and Discourse Entity Specific features.

The CONCEPTUAL PACT model suggests that dialogue participants negotiate a description that both find adequate for describing an object (Clark and Wilkes-Gibbs, 1986; Brennan and Clark, 1996). The speaker generates trial descriptions that the hearer modifies based on which object he thinks he is supposed to identify. The negotiation continues until the participants are confident that the hearer has correctly identified the intended object. The additional features suggested by this model include the previous description, since that is the description that will be modified, and how long ago the description was made. If the description were made further back in the dialogue, that would indicate that the negotiation process had been completed. Furthermore, the model suggests that, once a pact has been reached, the dialogue participants will continue to use the description that they previously negotiated. This aspect of the model is also similar to Passonneau's LEXICAL FOCUS model (Passonneau, 1995).

  interactions with other discourse entities: distance-last-ref, distance-last-ref-in-turns, number-pre-mentions, speaker-of-last-ref
  previous description: color-in-last-exp, type-in-last-exp, owner-in-last-exp, price-in-last-exp, quantity-in-last-exp, type-in-last-turn, color-in-last-turn, owner-in-last-turn, price-in-last-turn, quantity-in-last-turn, initial-in-last-turn

Figure 4: Conceptual Pact Feature Set.

The CONCEPTUAL PACT features in figure 4 encode how the current description relates to previous descriptions of the same entity. We encode when the entity was last described in terms of number of utterances and turns (distance-last-ref, distance-last-ref-in-turns), how frequently it was described (number-pre-mentions), who last described it (speaker-of-last-ref), and how it was last described in terms of turn and expression, since the description may have been broken into several utterances (color-in-last-exp, type-in-last-exp, owner-in-last-exp, price-in-last-exp, quantity-in-last-exp, type-in-last-turn, color-in-last-turn, owner-in-last-turn, price-in-last-turn, quantity-in-last-turn, initial-in-last-turn).

  ONE UTTERANCE distractors: type-distractors, color-distractors, owner-distractors, price-distractors, quantity-distractors
  SEGMENT distractors: type-distractors, color-distractors, owner-distractors, price-distractors, quantity-distractors

Figure 5: Contrast Set Feature Sets.

The INCREMENTAL model builds a description incrementally by considering the other objects that are currently expected to be in focus for the hearer (Dale and Reiter, 1995). These other objects are called distractors. The basic idea is to add attributes as necessary until any distractors are ruled out as competing co-specifiers. Based on these ideas, we developed a set of features we call CONTRAST SET features, as in figure 5. The goal of our encoding is to represent whether there are distractors present in the focus space which might motivate the inclusion of a particular attribute. (This representation only approximates the INCREMENTAL model, which utilizes a preferred salience ordering of attributes and eliminates distractors as attributes are added to a description; for example, adding the attribute value indicating that the object is a chair eliminates any distractors that are not chairs. Our encoding treats attributes instead of objects as distractors. This has the advantage that the preferred ordering of attributes could be adjusted according to the focus space, and this interpretation of Dale and Reiter's model has been shown in (Jordan, 2000) to perform similarly to the strict model. However, the feature representation is still impoverished with respect to (Jordan, 2000), since it does not capture what the most salient attribute values for the focus space are.)
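The power-set class assignment described in section 2.1 can be sketched in a few lines; this is an illustrative sketch, not the authors' implementation, and all names are hypothetical:

```python
from itertools import combinations

# The four attributes whose inclusion the learner predicts.
ATTRIBUTES = ("color", "price", "owner", "quantity")

# The 16 classes: every subset of the four attributes, from the
# empty set (type only) to the full set.
CLASSES = [frozenset(subset)
           for r in range(len(ATTRIBUTES) + 1)
           for subset in combinations(ATTRIBUTES, r)]

def class_of(expressed_attributes):
    """Map a nominal expression to its class: the subset of the
    four attributes that the expression actually realizes."""
    return frozenset(a for a in expressed_attributes if a in ATTRIBUTES)

# "the red $300 sofa" realizes color and price (type is always present):
assert class_of({"type", "color", "price"}) == frozenset({"color", "price"})
assert len(CLASSES) == 16
```

Enumerating subsets by increasing size reproduces the 2^4 = 16 categories that the classifier must discriminate.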
An open issue with deriving the distractors is how to define a focus space (Walker, 1996). We use two focus space definitions, one based on recency, and the other on intentional structure; see figure 5. For intentional structure we utilize the task goal segmentation encoded in the COCONUT corpus, as discussed above (SEGMENT). For recency, we simply consider the entities from the previous utterance as possible distractors (ONE UTTERANCE). For each focus space definition, we encode whether the attribute value of the item to be described is the same as that of at least one other item in the focus space (type-distractors, color-distractors, owner-distractors, price-distractors, quantity-distractors).

  task situation: goal, colormatch, colormatch-constraint-presence, pricelimit, pricelimit-constraint-presence, priceevaluator, priceevaluator-constraint-presence, colorlimit, colorlimit-constraint-presence, priceupperlimit, priceupperlimit-constraint-presence
  agreement state: influence-on-listener, commit-speaker, solution-size, pre-influence-on-listener, pre-commit-speaker, pre-solution-size, distance-of-last-state-in-utterances, distance-of-last-state-in-turns, ref-made-in-pre-action-state, speaker-of-last-state
  solution interactions: color-contrast, price-contrast

Figure 6: Intentional Influences Feature Set.

Jordan (Jordan, 2000) proposed a model to select attributes for nominals called the INTENTIONAL INFLUENCES model. This model posits that the task-related inferences and the agreement process for task negotiation are important factors in selecting attributes. The features used to approximate Jordan's model are in figure 6. The task situation features encode inferable changes in the task situation that are related to item attributes. The agreement state features encode critical points of agreement during problem solving. For example, if a dialogue participant is accepting a proposal, she may want to verify that she has the same item and the same entity description as her partner. These are features that (Di Eugenio et al., 2000) found to be indicative of agreement states and include DAMSL features (influence-on-listener, commit-speaker, pre-influence-on-listener, pre-commit-speaker) (Allen and Core, 1997), progress towards a solution (solution-size, pre-solution-size, ref-made-in-pre-action-state), and features inherent to an agreement state (speaker-of-last-state, distance-of-last-state-in-utterances, distance-of-last-state-in-turns). The solution interactions features represent situations where multiple proposals are under consideration which may contrast with one another in terms of solving color-matching goals (color-contrast) or price related goals (price-contrast).

2.3 Training Experiments

The final input for learning is training data, i.e., a representation of a set of nominal expressions in terms of feature and class values. In order to induce rules from a variety of feature representations, our training data is represented differently in different experiments. First, examples are represented using only the GIVEN-NEW features in figure 2 to establish a performance baseline for given-new information. Then other feature sets are added in to examine their individual contribution, culminating with the full feature set.

The output of each machine learning experiment is a model for nominal expression generation for this domain and task, learned from the training data.
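The contrast-set distractor encoding above can be sketched as follows, assuming entities are represented as plain attribute-value dictionaries; the helper name and data layout are hypothetical, not the paper's actual code:

```python
def distractor_features(target, focus_space,
                        attributes=("type", "color", "owner", "price", "quantity")):
    """For each attribute, flag whether some OTHER entity in the
    focus space shares the target's value for it, i.e. whether a
    distractor might motivate including that attribute."""
    feats = {}
    for a in attributes:
        feats[a + "-distractors"] = any(
            e is not target
            and target.get(a) is not None
            and e.get(a) == target.get(a)
            for e in focus_space)
    return feats

# ONE UTTERANCE focus space: entities mentioned in the previous utterance.
sofa = {"type": "sofa", "color": "red", "price": 300}
chair = {"type": "chair", "color": "red", "price": 150}
feats = distractor_features(sofa, [sofa, chair])
assert feats["color-distractors"] and not feats["type-distractors"]
```

The same function covers both focus-space definitions (SEGMENT vs. ONE UTTERANCE); only the set of entities passed in changes.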
To evaluate these models, the error rates of the learned models are estimated using 25-fold cross-validation, i.e., the total set of examples is randomly divided into 25 disjoint test sets, and 25 runs of the learning program are performed. Thus, each run uses the examples not in the test set for training and the remaining examples for testing.

3 Experimental Results

Table 1 summarizes our experimental results. For each feature set, we report accuracy rates and standard errors resulting from cross-validation. (Accuracy rates are statistically significantly different when the accuracies plus or minus twice the standard error do not overlap (Cohen, 1995), p. 134.) It is clear that performance depends on the features that the learner has available. The 16.3% MAJORITY CLASS BASELINE accuracy rate in the first row is a standard baseline that corresponds to the accuracy one would achieve from simply choosing the description type that occurs most frequently in the corpus, which in this case means that the nominal-expression generator would always use the color, price and quantity to describe a domain entity.

  Feature Sets Used                Accuracy (SE)
  Majority Class Baseline          16.3%
  Given-New                        [illegible]
  Given-New+Seg                    [illegible]
  Given-New+Cp                     [illegible]
  Given-New+IInf                   [illegible]
  Given-New+IInf+Cp+Seg            33.6%
  Given-New+Inh                    [illegible]
  Given-New+IInf+Inh               [illegible]
  Given-New+IInf+Inh+Cp            50.0%
  Given-New+IInf+Inh+Cp+Seg        [illegible]
  Given-New+IInf+Inh+Cp+Seg+1Utt   [illegible]

Table 1: Accuracy rates for the Nominal Generator using different feature sets, SE = Standard Error. Cp = the CONCEPTUAL PACT features. IInf = the INTENTIONAL INFLUENCES features. Inh = the INHERENT features. Seg = the CONTRAST-SET SEGMENT features. 1Utt = the CONTRAST SET ONE UTTERANCE features. [Accuracy and SE cells not quoted in the text are illegible in this copy.]

The row of Table 1 labelled GIVEN-NEW shows that providing the learner with information about whether the values of the attributes for a discourse entity are mutually known does not, in and of itself, improve performance over the baseline. Similarly, the rows labelled GIVEN-NEW+SEG and GIVEN-NEW+CP show that providing the features for contrast set and conceptual pact does not statistically improve performance over the baseline. The GIVEN-NEW+IINF and GIVEN-NEW+IINF+CP+SEG rows show that adding INTENTIONAL INFLUENCES features DOES provide a significant performance improvement, but allowing the learner to learn rules that would combine features from the INTENTIONAL INFLUENCES features, the CONTRAST SET features and the CONCEPTUAL PACT features does not significantly improve performance over just having the INTENTIONAL INFLUENCES features alone. Figure 7 shows the rules that are learned for the generation of nominal expressions given the GIVEN-NEW and INTENTIONAL INFLUENCES features.

The row labelled GIVEN-NEW+INH in Table 1 shows that, if we were only interested in doing well in this domain, adding discourse entity and task specific information DOES improve the performance of the learned nominal-expression generator. This is at the cost of losing generality in the rules that are learned. Figure 8 shows that the generation rules learned given access to the INHERENT feature set make use of many discourse entity, task, and speaker specific features. The speaker-pair feature alone is used in seven of the learned rules.
The GIVEN-NEW+IINF+INH row in Table 1 suggests that interesting rules can be learned by adding the INTENTIONAL INFLUENCES features to the INHERENT features, but the performance improvement over the INHERENT feature set is not significant.

The remainder of the table shows that the ability to utilize all of the features provides a slight performance improvement which however is not statistically significant. The last two rows suggest that adding in features representing various views of discourse segmentation does not contribute to performance. Figure 8 shows the generation rules learned with the best performing feature set, shown in the row labelled GIVEN-NEW+IINF+INH+CP. As mentioned above, many task, entity and speaker specific features are used in these rules. However, this rule set performs at 50% accuracy, as opposed to 33.6% accuracy for our most general feature set (shown in the row labelled GIVEN-NEW+IINF+CP+SEG).

4 Discussion and Future Work
R÷ ÇüÇD:¾Z&˜¿~õ Àj&Çlf&  /Á[ô… £ý£ö û¾ I ‰ À ‰ l  lfs € &Ãm I  ¶"·D¸ õ X¯»½¼ ô…R÷?ùDÇûÿùù ø¾ -ÇüG ¶"·D¸ õ X¯»½¼ ô…R÷?ùDÇþrÍ÷?ûÿùù øÇmƒDÍývü©øs÷Jö1ùiø %÷ ÇüÇD:¾Z&“¿~õqÀ¢&Ælf&  ŠÁ[ô… £ý£ö û¾ I ‰ À ‰ l CIX j €  ¶"·D¸ l ¹u»½¼ ô…DÇý ûúý1÷ÇmË Y ¾ ÇüG0Áô¼ù srþD//m¾ým¾ûÿùÿü©ø V÷¾ZröMÁÃô¾ %ùÿü©øJöD//m¾ýA0m¾ûúö ü©øÇm¾ü©øJöø /m¾ùN]m¾þRøsø Í÷Jö$DÇüÄ/I£ÁÃô¾ %ùÿü©øJöD//m ýA m¾û ö1ü©øÇm¾ü©øJö1ø /m¾ùm¾þRøsø V÷JöD/ÇüÅ/I- ¶"·D¸ l Xu»Â¼ ô…DÇý ûúý1÷?ûÿùNù ø¾ -ÍüG ¶"·D¸ l X»½¼ ô…R÷?ùD/mË Y ¾ -ÇüG-ÁÃô…R÷ / m¾üsý ûúþ%ø?ùÿým¾üsùei:¾Z& K @ ‰QЉ m£¿>& K € Љ ŠÁ[ôÂ÷ /A0mË2ö /m¾ùmËR÷ / mêö$DÍø?ùÿým¾ü©øJöø :¾ZvýÁ ô¼üsý£ûÿþ%ø?ùúým¾üsùei:¾Z& K @ ‰QЉ m§¿>& K € Љ  ¶"·D¸ l Xu»Â¼ ô¾ Rùÿü©øJö$D/m¾ý$A0m¾ûúö ü©øÇm¾ü©øJö1ø /m¾ùm¾þ%øsø Í÷JöDÇüÄBÈ/Á[ô¾ Rùÿü©øJöD/m¾ýA m¾û ö1ü©øÇm¾ü©øJö1ø /m¾ùmµø?þR÷ RüÅME ¶"·D¸ X¯»½¼ ô¼ù svþD/m¾ý]m¾ûúùÿü©ø Í÷¾ùA ýmµ÷ ù‰þÇü©ø5ŠÁ[ô¾ Rùÿü©øJö$D/m¾ý$A0m¾ûúö ü©øÇm¾ü©øJö1ø /m¾ùmµø?þ%÷ vüÅ #  ¶"·D¸ X »½¼ ô…R÷ / m¾ù svþD/m¾ý]m¾ûúùÿü©ø Í÷¾ýr/m¾ý%ø?ùúýr Á ô¾ù‰þrö$‰ø?ùÿøƒ mƒ Y ¾ -ÍüG Á ô…%÷ -mêüsý ûÿþRø?ùÿý]m üsùei:¾Z& K @ ‰QЉ m§¿>& K € Љ  ¶"·D¸ lGõ »Â¼ ô¼üsý£ûÿþRø?ùÿým¾üsùei:¾Z& K @ ‰QЉ m§¿>& K € Љ  Áaô¼ù srþD//m¾ým¾ûÿùÿü©ø V÷¾ZröBÁaô…R÷?ùD/mËDÇýiøs÷Jö1ü©ø¾ -ÇüGBÁ\ô¾ %ùÿü©øJöD//m ýA m¾û ö1ü©øÇm¾ü©øJö1ø /m¾ùmµø?þR÷ RüÄ  ¶"·D¸ »½¼ ô…R÷ / m¾üsý ûúþ%ø?ùÿým¾üsùei:¾Z@ ‰OЉ m£¿>& K € Љ "Áô…DÍý£ûÿý ÷ÇmËDÇýiøs÷Jö1ü©ø¾ZvýŽÁô¾ %ùÿü©øJöD//m¾ýA0m¾ûúö ü©øÇm¾ü©øJöø /m¾ùN]m¾þRøsø Í÷Jö$DÇüÄM„ Á[ô…R÷ -m¾ù srþD//m¾ým¾ûÿùÿü©ø V÷¾Zrö ¶"·D¸ [»Â¼ ô…R÷ / m¾üsý ûúþ%ø?ùÿým¾üsùei:¾Z@ ‰OЉ m£¿>& K € Љ ŠÁ[ô…DÇý ûúý1÷ 2ö1ø D  mËDÇývü©øs÷Jö ù‰ø R÷ ÍüÇDš¾ ‰qÆ õ Àj&Çlf&   ¶"·D¸ [»Â¼ ôÂ÷ /A mƒ2ö /m¾ùmË%÷ -mÑö$DÍø?ùÿý]mêü©øJöø :¾ -ÇüG-Á[ô¾ %ùÿü©øJöD//m¾ýA0m¾ûúö ü©øÇm¾ü©øJöø /m¾ùN]mµø?þR÷ vüÅ # /Áô…DÇý ûÿý ÷ÇmË Y ¾ -ÇüG ¶"·D¸ [»Â¼ ô¼ù svþD/m¾ý]mêûÿùÿü©ø Í÷¾ùAÂý$mµ÷ ù‰þÇü©ø5 ¶"·D¸ [»Â¼ ô…R÷?ùD/£ö1ûÿþrö1ø?ý1÷¾ -ÇüG ¶"·D¸ lGõ Xº¹¯»½¼ ô¾ Rùÿü©øJöD/m¾ýA mêûúö1ü©øÇmêü©øJöø /m¾ùm¾þRøsø Í÷Jö$DÍüÄ " ŠÁ½ô¼üÇr¹ö Y Í÷Çm¾ýA0m¾ûúö ü©øÇm¾ü©øJöø :¾ X8 s ‰ m8-Ácô…DÇý£ûÿý ÷ÇmËDÇý‰øs÷Jö ü©ø¾Zvý ¶"·D¸ lGõ Xº¹Ç»Â¼ ô… £ý²ö1û¾ I ‰ À ‰ 
l  lfs € &Èm I «ÁÉô…%÷ -m¾üsý£ûÿþRø?ùÿý]m¾üsù•i/:¾Z& K @ ‰OЉ m§¿>& K € Љ [Á ôÂ÷ /A0mË2ö ]/m¾ùmËR÷ -m öDVø?ùúým¾ü©øJö1ø š¾Zvý-/Á[ô¾ Rùÿü©øJöD/m¾ýA m¾û ö1ü©øÇm¾ü©øJö1ø /m¾ùm¾þRøsø V÷JöD/ÇüÅM„ ¶"·D¸ lGõ XÉ»½¼ ô¼ùN srþD5mêým¾ûÿùÿü©ø Í÷¾„ö$DÍø?ùÿý]m %ù ÷ DÍø?ùŠÁ[ôÂ÷ /AaV÷ D5m¾÷ Íû öø?ùÿý,¾ùRùÿø?ùúö1û0 ¶"·D¸ lGõ XÉ»½¼ ô… £ý²ö1û¾ I ‰ À ‰ l CIX j € MÁô¾ %ùÿü©øJöD//m¾ýA0m¾ûúö ü©øÇm¾ü©øJöø /m¾ùN]m¾þRøsø Í÷Jö$DÇüÄ  ¶"·D¸ lGõ XÊ»½¼ ôÂ÷ 5A0mË2ö ]/m¾ùN]mËR÷ -mêöDÍø?ùÿým¾ü©øJö1ø :¾ZRý-"Á8ô… £ý£ö û¾ I ‰ À ‰ l \ €§Ë À ‰  Á8ô¾ %ùúü©øJö$D5mêý$A0m¾ûúö ü©øÇm¾ü©øJö1ø 5mêù]m¾ø?þ%÷ vüÄME ¶"·D¸ lGõ XÉ»½¼ ô…R÷ -m¾ù srþD//m¾ým¾ûÿùÿü©ø V÷¾„öDÍø?ùÿý]mƒ Rù ÷ DÍø?ù-Áô… ý²ö û¾ I ‰ À ‰ l \ €0Ë À ‰  ¶"·D¸ lGõ XÉ»½¼ ô¼ùN srþD5mêým¾ûÿùÿü©ø Í÷¾ýrm¾ýRø?ùÿýiŠÁ[ô¾ %ùÿü©øJöD//m¾ýA0m¾ûúö ü©øÇm¾ü©øJöø /m¾ùN]m¾þRøsø Í÷Jö$DÇüÅ #  5A¼ö þRû ø ¶'· ¸ lGõ ¹ ¼¿]¨—»®Év t ß&¨P°µ®i·Ó^®i¬r»1ª—®i«®Ì*·³¾ª—¿ÍBµ¾â´PÛÚ¹ÛB´׬UªP« µÑÛÝ´PÛÝfµÑÙTÛ\²B· µêÛC»”·-ÜC´PÛ^Øf´Q³i¬r¯¨—»®i·‰Ë(ì2ÀP® ±‰°µ¬U··®i·€®iªP±²´«—® ¯À—® §Ñ´]¨—»¬r¯¹¯¹»1³¼Ál¨—¯¹®i·‰Î®UË ¿ ¥gª Á8ÎϞ ¥ ´]°µ´U»iÎ ª »1³¾±²®UÎ Á!Ï&ª—®‰»€¬UªP«ŸÎ€¨P¬UªE¯³µ¯J¶UÎTìО)춐ÅG® ´]ªP°¼¶ ª—´k´U¯À—®‰» Ïx´U»¸6¬rÅP۵¶³µªP¿4­a¬U± ÀP³µª—®½°¼®i¬r»1ªP³¾ª—¿¯¹´ «—®‰¯¹®‰»1­\³µªP³µª—¿8¯À—®í±²´]ª-¯¹®iªE¯*´U§¬ ªP´]­a³µªP¬U°®²ÄÅP»®i·Çº ·³¼´]ªËÌǪa´]¨P»®²ÄÅ®‰» ³µ­ ®iª-¯·‰ÎEÏx®&¯¹»1¬U³¾ª\¬8ª—´]­a³µªP¬U°¼º ®²ÄÅP»®i··³µ´]ªí¿U®iª—®‰»1¬r¯¹´U»Œ¯¹´~°¼®i¬r»1ªÏ&ÀP³µ± À¬r¯¹¯¹»1³µÁ¨—¯¹®i· ´U§¬Uªâ®iªE¯³µ¯J¶¥¯¹´Ã³µªP±‰°¾¨P«—® ³µªâ¬[ª—´]­a³µª¬U°«P®i·±²»1³¼Å—º ¯³¼´]ªâ§Ñ»´]­¬Ã±²´U»Å¨P·~´U§«P³µ¬U°µ´U¿]¨—®i·‰ËÁ~¨—»€»1®i·¨P°¼¯· ·À—´RϯÀP¬r¯ t ° Á€¨—»ŠÁ®i·¯ÐÅG®‰»§Ñ´U»1­a³µªP¿Þ°¼®i¬r»1ªP®i«)ª—´]­a³µªP¬U°¼º ®²ÄÅP»®i·1·³¼´]ªÐ¿U®iª—®‰»1¬r¯¹´U»\±‰¬UªA¬U± ÀP³¼®‰ãU®¬âäråEæ ­\¬r¯±1À ¯¹´íÀ¨P­a¬Uª8ÅG®‰»§Ñ´U»1­a¬UªP±²®¬U·„´UÅPÅG´]·¹®i« ¯¹´u¬ çiè æ Á¬U·®i°µ³µª—® ¨ ° ì&À—®2³µª-¯¹®iªE¯³¼´]ª¬U°—³µªR㨗®iª±²®i·T§Ñ®i¬r¯¨—»®i·g«—®‰ãU®i°Âº ´UÅG®i«€¯¹´!¬rÅPÅP»´%ĝ³¾­a¬r¯¹® ¬ ´U» «P¬Uª\¼ÿ·^³¾ªE¯¹®iª-¯³¼´]ªP¬U° ³¾ªR㨗®iªP±²®i·„­ ´«—®i°«—´€·³¼¿]ª³…{±‰¬Uª-¯°¼¶³µ­ Å»´%ãU® ÅG®‰»§Ñ´U»1­a¬Uª±²® ¨ ° 
îP®i¬r¯¨—»®i··Å®i±‰³¾{±¯¹´€¯ÀP®¯¬U·¹¸lΐ·¹ÅG®i¬r¸U®‰»¬UªP« «³µ·±²´]¨—»1·®Ã®iª-¯³¼¯J¶=¬U°µ·¹´‹ÅP»´%㝳µ«—®k·³¼¿]ªP³¾{±‰¬UªE¯ ÅG®‰»§Ñ´U»1­a¬Uª±²®í³µ­ ÅP»´RãU®i­ ®iª-¯· ¨ ° z—¨—»ÅP»1³¾·³µª—¿]°¼¶UÎT¯À—®¨P·¹®½´U§!¿]³¼ãU®iªºJª—®‰Ïα²´]ªº ¯¹» ¬U·¹¯g·¹®‰¯¬UªP«u±²´]ªP±²®‰ÅP¯¨P¬U°lŬU±²¯§Ñ®i¬r¯¨—»®i·x«—´ ªP´U¯2³µ­ ÅP»´RãU®íÅG®‰»§Ñ´U»1­a¬UªP±²®UË ¦â®­a³¼¿]À-¯2ÀP¬iãU®í®²ÄÅG®i±²¯¹®i«¯¹´\¬U± ÀP³¼®‰ãU®¬ ÁG®i·¹¯Çº ÅG®‰»§Ñ´U»1­a³µªP¿\¬U±‰±‰¨—»1¬U±²¶kÀP³¼¿]À—®‰»*¯ÀP¬UªâäråEæ Ál¨—¯*¬U· ¯ÀP³¾·³µ·¯À—®{»1·¹¯·¹¯¨«—¶â´U§¯ÀP³µ·¸³µª«Î^¯À—®‰»1®c¬r»® ·¹®‰ãU®‰» ¬U°³¾··¨—®i·&¯¹´c±²´]ªP·1³µ«—®‰»iË2¼»1·¯‰Îl¯À—®8ª—´]­a³µª¬U° ®²ÄÅP»®i··1³¼´]ªP·€³µªk¯À—®a±²´U»Ål¨P·€­a¬i¶¥»®‰Å»®i·¹®iª-¯xƹ¨P·¯ ÚfîBð Ïx¬%¶Ã¯¹´[«—®i·±²»1³µÁ®¯À—®8®iªE¯³¼¯V¶4¬r¯~¯ÀP¬r¯~ÅG´]³µª-¯ ³µªk¯À—®a«³µ¬U°¼´U¿]¨—®Uη¹´[¯À¬r¯í¨P·³µª—¿½À¨P­a¬UªkÅG®‰»§ê´U»º ­a¬Uª±²® ¬U·€¬½·¹¯¬Uª«P¬r»1«k¬r¿]¬U³µªP·¹¯~Ï&ÀP³¾±1À4¯¹´c®‰ãr¬U°µ¨º ¬r¯¹®¯À—®x°¼®i¬r»1ª—®i«8ª—´]­a³µª¬U°Âº?®²ÄÅ»®i··³¼´]ª~¿U®iªP®‰»1¬r¯¹´U»1· ÅP»1´%㝳µ«—®i·2¬UªÃ´%ãU®‰»1°¼¶½»1³µ¿U´U»´]¨P·¯¹®i·¹¯ ï Á!ÁG®‰»1°µ¬UªP«P®‰»iÎ ç ÖUÖr§]ó ËxîP¨—»¯À—®‰» ­ ´U»®UÎEÏx®&«—´ª—´U¯¸ª—´%Ï.Ï&À—®‰¯À—®‰» À¨P­a¬UªP·*Ïx´]¨P°µ«Å»´«¨P±²®³µ«—®iª-¯³µ±‰¬U°Tª—´]­a³µªP¬U°^®²Ä-º ÅP»1®i··³¼´]ªP·!¿]³¼ãU®iª¥¯ÀP® ·1¬U­ ®a«P³µ·±²´]¨P»1·¹® ·³¼¯¨P¬r¯³¼´]ªË ¦eÅ»®‰ã³µ´]¨P·Ã·¹¯¨«—¶Z´U§\¬UªP¬rÅÀ—´U»k¿U®iª—®‰»1¬r¯³µ´]ª$³¾ª ¥ À³µª—®i·¹®Œ·À—´RÏg®i«¯ÀP¬r¯»1¬r¯¹®i·^´U§—­a¬r¯±1À§ê´U»^À¨P­a¬Uª ·¹ÅG®i¬r¸U®‰»1·x¬iãU®‰» ¬r¿U®i«[vÏ-æ(§Ñ´U»g¯À¬r¯ÅP»´UÁl°¼®i­ ïÈÑ ®iÀ ¬UªP«Êîi°µ°µ³µ·ÀÎ ç ÖUÖ'vUó ÎP¬UªP«u´]¨—»g»1®i·¨P°¼¯·g·À—´%ÏZ¯ÀP¬r¯ ³µª±‰°µ¨P«P³µªP¿.·¹ÅG®i¬r¸U®‰»¹ºJ·¹ÅG®i±‰³…{l±Š§ê®i¬r¯¨—»1®i·Ð³µ­aÅP»´%ãU®i· ÅG®‰»§Ñ´U»1­a¬UªP±²®½·³¼¿]ª³…{±‰¬Uª-¯°¼¶UˀÁ~¨—» ±²´]ªP±‰°µ¨P·³µ´]ªÐ³µ· ¯ÀP¬r¯g³¼¯­a¬%¶8ÁG®2³µ­aÅ´U»1¯¬UªE¯Œ¯¹´됨P¬Uª-¯³¼§Ñ¶¯À—®&Á®i·¯ ÅG®‰»§Ñ´U»1­a¬UªP±²®[¯À¬r¯a¬pÀ-¨­a¬UªA±²´]¨P°µ«A¬U± ÀP³¼®‰ãU®Ã¬r¯ ­a¬r¯± ÀP³µª—¿\¯À—®ª—´]­a³¾ªP¬U°®²ÄÅP»1®i··³¼´]ªP·³¾ª[¯À—®±²´U»¹º Ũ·‰Îr¿]³¼ãU®iª ¯À—®±²´]­ Ű¼®‰¯¹®«P³¾·±²´]¨—»1·¹®x±²´]ªE¯¹®²Ä¯¬UªP« ¶"·D¸ õ X¯»½¼ ô…DÇý£ûÿý ÷¾þY ŠÁ[ô¼ýCÍ÷¾ I ‰ À j ŠÁ[ô¼üÇr¹ö Y Í÷ÇmËrö1ùÿ÷¾  € m0m ‰Q\ m 
I]ЉqÒ§‰  ¶"·D¸ Xº¹¯»½¼ ô…DÇý ûÿý ÷¾þY ŠÁô¾ù‰þrö$‰ø?ùÿøƒ 'Ä  ¶"·D¸ Xº¹¯»½¼ ô¾ %ùúü©øJö$D5mêý$A0m¾ûúö ü©øÇm¾ü©øJö1ø 5mêù]m¾ø?þ%÷ vü1¾ MÁ[ô¾ Rùÿü©øJö$D/m¾ûúö ü©øÇmµ÷ /A m¾ùN]mµø?þR÷ vüÄM„ E/Á½ôÂø ]r:¾Mlfs € &ÃmZ ¶"·D¸ l Xº¹¯»½¼ ô¾ù‰þrö$iø?ù øƒ "Ä Á½ô…R÷?ùD:¾þY výCi ¶"·D¸ l Xº¹¯»½¼ ô¾ù‰þrö$iø?ù øƒ "Ä Á½ôÂø ]r/m¾ùm¾ûúö ü©øÇmË/Ê],¾ZRý- ¶"·D¸ l ¹u»Â¼ ô¼üÇr¹ö Y Í÷ÇmËrö1ùÿ÷¾Z@ € Ò§‰ m  m ‰f ŠÁ½ô¾ Rùÿü©øJö$D/m¾ûúö ü©øÇmµ÷ /A m¾ùN]mµø?þR÷ vüÄME " /Á[ô¾ Rùÿü©øJöD/m¾ûúö ü©øÇmµ÷ /AÃÄ/I ¶"·D¸ l »Â¼ ô…R÷ / mËDÇýgù øÇmêüÇrÇö Y Í÷¾ZDÍýgù ø5BÁô¼þ%øsø Í÷JöD/mËiþærV÷šÄ/I„ ¶"·D¸ l »Â¼ ô…R÷?ùD5mêù]mêûúö1ü©øÇm¾ø?þ%÷ ,¾ZRý-Á[ô¼þRøsø Í÷Jö$D5mƒiþærÍ÷šÅ E ¶"·D¸ l Xu»Â¼ ô…R÷?ùDš¾þY RýCiÁ[ô¼þRøsø Í÷Jö$D/mËiþrÍ÷šÄMEÎ ¶"·D¸ l XÊ»½¼ ô¾ùiþvö‰ø?ù ø m¾ùm¾ûúö ü©øÇmË/Ê],¾ZRý-"Á ô¾ %ùÿü©øJöD//m¾û ö1ü©øÇmµ÷ /A0m¾ùmµø?þR÷ RüÅMEÉ*ÁQô¼þ%øsø Í÷JöD/mËiþrÍ÷šÄMEÉ$Á ô…R÷?ùDÄM„ "  ¶"·D¸ l Xu»Â¼ ô…R÷?ùD5mêù]mêûúö1ü©øÇmƒ5Ê]¾ -ÇüGÁ[ô¼üÇr¹ö Y Í÷ÇmËrö1ùÿ÷¾£o&˜ÀŽÀJmêõ ‰KZKMÓ  ¶"·D¸ l Xu»Â¼ ô…R÷?ùDÍþrV÷?ûúùù ø¾ -ÇüG ¶"·D¸ X¯»½¼ ô¼üÇr¹ö Y Í÷ÇmËrö1ù ÷¾  € mMm ‰Q\ m IЉBÒ£‰ MÁ½ô¼üÇr¹ö Y Í÷Çm¾ýA m¾û ö1ü©øÇm¾ü©øJö1ø :¾ I ‰ À j 0Á[ô…DÇý£ûÿý1÷ÇmƒDÍý‰øs÷Jö ü©ø¾ZRý- ¶"·D¸ X¯»½¼ ô…DÇý ûÿý ÷¾þY MÁ½ô¼ýCV÷¾ X8 s ‰ m8/Á½ô…R÷?ùDÅM„ ##  ¶"·D¸ [»Â¼ ô…R÷ / m¾üsý ûúþ%ø?ùÿým¾üsùei:¾Z@ ‰OЉ m£¿>& K € Љ ŠÁ[ô…R÷?ùDÄ "# 0Á[ô…DÇý ûúý1÷ÇmËDÇý‰øs÷Jö ü©ø¾ZRý- ¶"·D¸ [»Â¼ ô…DÇý ûúý1÷¾þY  ¶"·D¸ [»Â¼ ô¾ Rùÿü©øJö$D/m¾ûúö ü©øÇmµ÷ /A m¾ùN]mµø?þR÷ vüÄME # /Á[ô¼üÇr¹ö Y V÷Çmƒvö ù ÷¾§Ô €  s Ó mÿ € m0ÔVqÁô¾ %ùÿü©øJöD//m¾û ö1ü©øÇmµ÷ /AÃÅ  ¶"·D¸ lGõ »½¼ ô¼þRøsø Í÷Jö$D/mËiþrÍ÷šÅ "  Áô¼þ%øsø Í÷JöD/mËiþrÍ÷šÄM„ Áô¾ù‰þrö$‰ø?ùÿøƒ 'ÅME ¶"·D¸ lGõ »½¼ ô¾ Rùÿü©øJöD/m¾ýA mêûúö1ü©øÇmêü©øJöø /m¾ùm¾þRøsø Í÷Jö$DÍü1¾[ I][ §Á½ô…R÷?ývûFÄ ŠÁ[ô…R÷?ùD5mƒDÍý‰øs÷Jö ü©ø¾ -ÍüG ¶"·D¸ lGõ »½¼ ô¾ùiþvö‰ø?ù ø 'Å\m E ¶"·D¸ lGõ »½¼ ô¼üÇr¹ö Y V÷¾§Ôºm& I &ÇŠÁ½ôÂ÷ /A0Í÷ D/mµ÷ Çûúö1ø?ùÿý]¾ùAaV÷ D ¶"·D¸ lGõ Xº¹É»½¼ ô… £ý²ö1û¾ I ‰ À ‰ l  lfs € &Èm I /Áô…%÷ -mêüsý ûÿþRø?ùÿý]mêüsùeiš¾Z& K @ ‰OЉ m£¿>& K € Љ  ¶"·D¸ lGõ Xº¹Õ»½¼ ô¼üÇr¹ö Y Í÷ÇmËrö ù ÷¾§Ô €  s Ó m˜¿ € m0ÔVWÁNô…%÷ -mêüsý 
ûÿþRø?ùÿý]mêüsùeiš¾Z& K @ ‰OЉ m£¿>& K € Љ [ÁÐôÂ÷ /A0Í÷ D//m ÷ Çûúö1ø?ùÿý,¾ùvù ø?ùúö û  ¶"·D¸ lGõ Xº¹É»½¼ ô… £ý²ö1û¾ I ‰ À ‰ l  lfs € &Èm I /Áô…%÷?ývûFÅME ¶"·D¸ lGõ XÉ»½¼ ô… £ý²ö1û¾ I ‰ À ‰ l CIX j €  ¶"·D¸ lGõ XÉ»½¼ ô…R÷?ýRûFÅMEŠÁ[ô¼þRøsø Í÷Jö$D5mƒiþærÍ÷šÄ0›-Á½ôÂ÷ /AaV÷ D5m¾÷ Íû öø?ùÿý,¾ùRùÿø?ùúö1û0§Á[ô…%÷?ùDÄME "$#  ¶"·D¸ lGõ XÉ»½¼ ô¼üÇr¹ö Y V÷Çmƒvö ù ÷¾§Ô €  s Ó mÿ € m0ÔV Áô¾ù‰þrö$‰ø?ùÿøƒ 'ÅME ¶"·D¸ lGõ XÉ»½¼ ô¼þRøsø Í÷Jö$D/mËiþrÍ÷šÄ0›› 5A¼ö þRû ø ¶'· ¸ lGõ ¹ ¼¿]¨—»®)§ t ì2À—®€ÁG®i·¹¯&Å®‰»1§ê´U»1­\³µª—¿ »1¨P°¼®€·¹®‰¯‰Î°¼®i¬r»1ª—®i«Ã¨P·³µª—¿ ¯ÀP®±²´]­Ál³µªP¬r¯³¼´]ª[´U§„¯À—®CµËâ´PÛGÚÛC´*ÎQµêÛÚ Ý´PÛÝfµÑÙTÛ\²B·^µÑÛC»”·EÜC´PÛTØO´O³r΢µÑÛ/.C´±\´ÛÝTΝ¬Uª«[ØGÙTÛ^Øf´¹PÝGÜ\²B·^¹i²ØÝ[§Ñ®i¬r¯¨—»®í·¹®‰¯·‰Ëgì2À—®€±‰°µ¬U··¹®i·®iª±²´«—® ¯À—®í§Ñ´]¨—»2¬r¯¹¯¹»1³µÁ¨—¯¹®i·‰Î—®UË ¿—˼Π¥:ª Á8ÎО ¥ ´]°¼´U»iÎ ª »1³µ±²®UÎ Á!Ï*ª—®‰»¬UªP«©Î~¨P¬Uª-¯³¼¯V¶UÎìxžÊìg¶-ÅG®€´]ªP°µ¶UË ¯À—®\³¾«—®iªE¯³µ¯J¶Ã´U§¯À—® »1®‰§ê®‰»®iª-¯‰ËaÌǪp¬U«P«P³µ¯³¼´]ªÎ¯À—® «P³ONu±‰¨P°¼¯J¶!´U§—¯ÀP³µ·ÅP»´UÁ°¼®i­M«—®‰ÅG®iªP«P·´]ª¯À—®gª¨P­ º ÁG®‰»\´U§í¬r¯¹¯¹»1³µÁ¨—¯¹®i·c¬iãr¬U³µ°µ¬rÁ°¼®Ã§ê´U»½«—®i·±²»1³µÁ³µª—¿â¬Uª ´UÁÆ¹®i±²¯³µª\¯ÀP®!«—´]­\¬U³µª ¨ ´]¨—»xªP´]­a³µªP¬U°®²ÄÅP»1®i··³¼´]ª ¿U®iª—®‰»1¬r¯¹´U»ÀP¬U·g¯¹´ ±²´U»»®i±²¯°¼¶\­\¬r¸U®*§Ñ´]¨—»«P³açG®‰»®iªE¯ «—®i±‰³µ·1³¼´]ªP·¯¹´¬U±1À³¼®‰ãU®&¬Uª ®²Ä—¬U±²¯­a¬r¯± Àa¯¹´À¨P­a¬Uª ÅG®‰»§Ñ´U»1­a¬UªP±²®UˌµªP¬U°µ°µ¶UÎU¯À—®~ØGÙØÙTÛÜÝu±²´U»Å¨·³µ· Ũ—Ál°µ³µ±‰°¼¶Ã¬%ãv¬U³¾°µ¬rÁ°¼®UάUªP«p´U¯À—®‰»8»®i·¹®i¬r» ±1À—®‰»1·8±‰¬Uª ª—´RÏM¬r¯¹¯¹®i­ ÅP¯*¯¹´\³µ­ ÅP»1´%ãU®€´]ª[´]¨—»&»®i·¨P°¼¯·iË Á~ªP®´U§€¯À—®4­a´]·¹¯c·1¨—»ÅP»1³¾·³µª—¿k»®i·¨P°µ¯·a´U§€´]¨—» ·¹¯¨P«P¶ ³¾·¯À—®S{ªP«P³µªP¿!¯À¬r¯x­a¬Uª-¶ ´U§¯À—®*¯À—®‰´U»®‰¯Çº ³µ±‰¬U°µ°µ¶a­ ´U¯³¼ãr¬r¯¹®i«u§Ñ®i¬r¯¨—»®i·Å»®‰ã³µ´]¨P·°¼¶8ÅP»´UÅG´]·¹®i« ³µª[¯ÀP®°µ³¼¯¹®‰»1¬r¯¨P»®í«—´\ª—´U¯!³µ­ ÅP»1´%ãU®íÅG®‰»§Ñ´U»1­a¬UªP±²® ´]ª´]¨P»&¯¬U·¹¸Ëæ£*´%Ïx®‰ãU®‰»i㵪[Å»®‰ã³µ´]¨P·Ïg´U»1¸lÎ ¬ ´U»¹º «P¬Uª ïǬ ´U»1«P¬UªÎUòråUåUåEóT¨—¯³¾°µ³…¯‰®i«€¯À—®ØGÙTØGÙTÛÜÝ8±²´U»¹º ŨP·a¯¹´6«—®‰ãU®i°¼´UÅ=¬â»1¨P°¼®²º?Ál¬U·¹®i«A­ ´«—®i°´U§€ª—´]­a³Âº 
Jordan also found that varying how contrast sets are derived made no significant difference in performance, and that the CONCEPTUAL PACTS model was significantly worse than the INTENTIONAL INFLUENCES model. In future work, we plan to perform similar experiments on different corpora with different communication settings and problem types (e.g. planning, scheduling, designing) to determine whether our findings are specific to the genre of dialogues that we examine here, or whether they are more general. We also intend to develop other feature sets to provide additional approximations to these models.

References

James Allen and Mark Core. 1997. Draft of DAMSL: Dialog act markup in several layers.

Susan E. Brennan and Herbert H. Clark. 1996. Lexical choice and conceptual pacts in conversation. Journal of Experimental Psychology: Learning, Memory and Cognition.

Herbert H. Clark and Catherine R. Marshall. 1981. Definite reference and mutual knowledge. In Joshi, Webber, and Sag, editors, Elements of Discourse Understanding, pages 10-63. Cambridge University Press, Cambridge.

Herbert H. Clark and Deanna Wilkes-Gibbs. 1986. Referring as a collaborative process. Cognition, 22:1-39.

Paul R. Cohen. 1995. Empirical Methods for Artificial Intelligence. MIT Press, Boston.

William Cohen. 1996. Learning trees and rules with set-valued features. In Proceedings of the 14th Conference of the American Association of Artificial Intelligence (AAAI).

Robert Dale and Ehud Reiter. 1995. Computational interpretations of the Gricean maxims in the generation of referring expressions. Cognitive Science, 19(2):233-263, April-June.

Barbara Di Eugenio, Pamela W. Jordan, Johanna D. Moore, and Richmond H. Thomason. 1998. An empirical investigation of collaborative dialogues. In ACL-COLING 98: Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics, Montreal, Canada, August.

Barbara Di Eugenio, Pamela W. Jordan, Richmond H. Thomason, and Johanna D. Moore. 2000. The agreement process: An empirical investigation of human-human computer-mediated collaborative dialogues. To appear in International Journal of Human-Computer Studies.

Barbara J. Grosz and Candace L. Sidner. 1986. Attention, intentions and the structure of discourse. Computational Linguistics, 12:175-204.

Julia B. Hirschberg. 1993. Pitch accent in context: Predicting intonational prominence from text. Artificial Intelligence, 63:305-340.

Pamela W. Jordan. 2000. Intentional Influences on Object Redescriptions in Dialogue: Evidence from an Empirical Study. Ph.D. thesis, Intelligent Systems Program, University of Pittsburgh.

Chris Mellish, Alistair Knott, Jon Oberlander, and Mick O'Donnell. 1998. Experiments using stochastic search for text planning. In Proceedings of the International Conference on Natural Language Generation, pages 97-108.

Margaret G. Moser and Johanna Moore. 1995. Investigating cue selection and placement in tutorial discourse. In ACL 95, pages 130-137.

Jon Oberlander. 1998. Do the right thing ... but expect the unexpected. Computational Linguistics, 24(3):501-508.

Rebecca J. Passonneau. 1995. Integrating Gricean and attentional constraints. In Proceedings of IJCAI 95.

Massimo Poesio. 2000. Annotating a corpus to develop and evaluate discourse entity realization algorithms: Issues and preliminary results. In Proceedings of the Language Resources and Evaluation Conference (LREC-2000).

Ellen F. Prince. 1981. Toward a taxonomy of given-new information. In Radical Pragmatics, pages 223-255. Academic Press.

Michael Strube and Maria Wolters. 2000. A probabilistic genre-independent model of pronominalization. In Proceedings of the First Meeting of the North American Chapter of the Association for Computational Linguistics, pages 18-25.

J. M. B. Terken. 1985. Use and Function of Accentuation: Some Experiments. Ph.D. thesis, Institute for Perception Research, Eindhoven, The Netherlands.

Marilyn A. Walker. 1996. Limited attention and discourse structure. Computational Linguistics, 22(2):255-264.

Ching-Long Yeh and Chris Mellish. 1997. An empirical study on the generation of anaphora in Chinese. Computational Linguistics, 23(1):169-190.
2000
24
       "!$#&%" '()* +-,/.0.1,24365879:,;,/<>=?,@BA.0CEDAF0G4H+-AGI2I24F0.1,;. JLKIMONQPSRK4TUK4V4MXWXY[Z'KI\/]OMK_^`MONbacK d MOKI\Nbe fgKihVQecNbMjVQ]ONbMk l/m WXYKInop\q;Ksr[V4`acKIM]O`otT ubvbw4xy r[^z_h{}|~MjVQ\cW€K ƒ‚;‚ „†…;‚b‡ ˆ‰/Š‹;‚Œ‰c‚4ŠcŽb‰Œ‹/ŽI‘:’ƒ“”Š;•/•;–/—c‚”—ˆb‰;Šƒ‹‚˜Œ‰‚bŠŽ”‰ŒU‹;Ž4 ™š1<›24G4Aœb2 :ž ŸIžU¡£¢¤?¥§¦bž©¨ª4ž)«¬­žO¢?®4ª4¥°¯€±4ž³²p´›¤i¢´›ªIµ ¡?¬­¤­±b¢¬­¥¶ª4·1¸”ª4¥¶¬?žµ¹¡?¬£¨º¬­ž(¬­¤£¨ºª4¡£Ÿ»±”¢ž)¤?¡¼¬?®”¨X¬/¥§ª»µ ½ ´›¾ ½ ž)¡1¤­žO¨º¿4¿4¾¶ÀI¥¶ª4·¬?®4ž'¤­žU·›±4¾°¨X¤Áµ¹ž»¿Q¤­ž)¡?¡­¥¶´›ª ¢´€Ã¿4¥¶¾¶ž)¤¬­´Ä¥¶¬­¡8´X«ªE´›±4¬?¿4±4¬Å ÆtÃÇ¿4¾¶žµ ÃǞ)ªI¬­žOŸs¥¶ª ¨Xª ¨º¾¶·›´›¤?¥¶¬­®4ÃÈ¢)¨X¾¶¾¶žOŸ:¢´€Ã¿4¥¶¾¶žµ ¤?ž)¿4¾°¨›¢ž€É4¬­®Q¥§¡(¬­ž¢­®Qª4¥°¯›±Qž0®”¨º¡¼¿4¤?´ ½ žOŸÊ±4¡­ž²p±4¾ ²p´€¤¼®b¨Xª”ŸI¾¶¥¶ª4·ª4´€ªIµÁ¢´›ª”¢)¨º¬­ž)ªb¨X¬­¥ ½ ž¿Q®4ž)ª4´€Ãµ ž)ªb¨4˨Xª”Ÿ8«(ž ŸIž)ÃÇ´›ª4¡?¬­¤¨X¬­ž:¥¶¬L´›ªÍÌ&¨X¾°¨À ²p±Q¾§¾Îµt¡­¬?ž)Ã6¤?žOŸI±4¿Q¾§¥°¢)¨º¬­¥¶´›ªs¨Xª”ŸLÏ1¤£¨º¦4¥°¢¡­¬?ž)à ¥¶ªI¬­ž)¤ŸI¥¶·›¥¶¬£¨º¬­¥¶´›ª/Å Ð Ñ .ƒ2»GQÒC0F0œb24H?Òc. ÌL´€¡­¬ª”¨X¬?±4¤£¨º¾c¾°¨Xª4·€±”¨X·€ž)¡¢´€ª4¡­¬?¤­±”¢¬«(´›¤ŸI¡'¦IÀ:¢´›ªIµ ¢)¨º¬­ž)ª”¨º¬­¥¶ª4·Óô›¤?¿4®4ž)ÃǞ)¡¬­´›·€ž)¬­®Qž)¤L¥§ª¡?¬­¤­¥°¢¬L´€¤£ŸIž)¤?¡OÅ Ô ±”¢?®_¢´€ª”¢)¨X¬?ž)ª”¨º¬­¥ ½ žÃ´€¤­¿4®Q´›¬£¨€¢¬­¥°¢¡¢)¨ºª¦bž¥¶ÃÇ¿4¤­ž)¡Áµ ¡­¥ ½ ž)¾¶À}¿Q¤­´4ŸI±”¢¬?¥ ½ ž›ÉžU¡­¿bžO¢¥°¨X¾¶¾¶À_¥¶ªÕ¨º·›·€¾§±Q¬­¥¶ª”¨X¬?¥ ½ ž0¾°¨ºªIµ ·›±b¨X·›žU¡¾¶¥¶Ö›žLÏ0ÀIÃ}¨º¤£¨&´›¤Ç× ±Q¤­ÖI¥¶¡­®;É0¨ºª”Ÿ¥¶ª©¨X·€·›¾¶±4¬­¥Îµ ª”¨º¬­¥ ½ žOغ¿”´€¾¶À>¡?ÀIª>¬?®4ž)¬?¥Ù¢˜¾Ù¨ºª4·›±b¨X·›žU¡0¾¶¥¶Ö›ž'ÆtªI±4ÖI¬­¥¶¬?±4¬OÅÆtª ¡­±b¢­®E¾°¨ºª4·›±”¨º·›ž)¡©¨Ú¡­¥¶ª4·€¾¶ž©«¼´€¤£ŸÛÃ}¨ÀB¢´€ª>¬¨X¥¶ª6¨º¡ Ã}¨ºªIÀ©Ã´€¤­¿4®4žUÞ)¡_¨X¡:¨XªÍ¨ ½ ž)¤£¨º·›žµt¾¶ž)ª4·€¬­®ÝÜ$ª4·›¾¶¥¶¡­® ¡­žUª>¬?ž)ª”¢ž€ÅßÞ à/¥§ªQ¥§¬?žµt¡­¬£¨º¬­žÃ´€¤­¿4®Q´›¾¶´›·€À6¥¶ªÈ¬­®4žÛ¬­¤£¨€ŸI¥¶¬­¥¶´›ªá´X² ¬­®QžS׫(´Xµ­âž ½ ž)¾Çã¹ä1´›¡?֛ž)ªQª4¥¶ž)Ã¥tÉåæ›ç€è›é_¨ºª”Ÿ[ê1ž)¤­´ ¥¶Ã¿4¾¶ž)ÃǞ)ªI¬£¨º¬­¥¶´Xª4¡ãtëcž)žU¡­¾¶ž)À-¨ºª”Ÿ©ä¨X¤?¬­¬­±Qª4ž)ª;É0ì€í›í€í›é ®”¨º¡¦bž)ž)ª ½ ž)¤­À¡­±b¢)¢ž)¡?¡t²p±4¾¥¶ª¥§ÃÇ¿4¾¶ž)ÞUª>¬?¥¶ª4·L¾Ù¨º¤­·€žµ ¡£¢U¨X¾¶ž›É€¤­´›¦Q±4¡­¬¼¨ºª”Ÿ"žî}¢¥¶ž)ªI¬cô€¤­¿4®Q´›¾¶´›·€¥Ù¢U¨X¾>¨ºª”¨X¾¶ÀIï)ž)¤Áµ 
·›žUª4ž)¤£¨º¬­´€¤­¡1²p´›¤0¢´›ª”¢)¨º¬­ž)ªb¨X¬­¥ ½ ž¾°¨XªQ·›±”¨º·›ž)¡É4¥¶ª”¢¾¶±”ŸI¥¶ª4· ¬­®Qž¢´›ÃÇÞ)¤¢¥°¨X¾¶¾¶À¥¶Ã¿b´›¤?¬£¨XªI¬Ü±4¤­´€¿”ž¨Xª:¾°¨Xª4·€±”¨X·€ž)¡ ¨XªbŸBª4´€ªIµtÆtª”ŸI´Xµ­Ü±4¤?´›¿bžO¨XªÄž”¨XÃÇ¿4¾¶ž)¡:¾¶¥¶Ö›ž-à/¥¶ª4ª4¥¶¡­®/É ×±4¤­ÖI¥¶¡­®©¨ºª”Ÿð±4ª4·I¨X¤?¥Ù¨ºª;Åð´X«(ž ½ ž)¤É0ä1´›¡?֛ž)ª4ªQ¥§žUÃ¥ ®4¥¶Ã¡?ž)¾Î²'±4ª”ŸIž)¤?¡­¬?´>´4Ÿñ¬­®”¨º¬}®Q¥§¡L¥¶ª4¥¶¬­¥°¨º¾¥¶ÃÇ¿4¾¶ž)Þ)ªI¬£¨jµ ¬­¥¶´€ª:®”¨›Ÿ³¡­¥¶·›ª4¥Î¸ ¢U¨XªI¬¾¶¥¶Ã¥¶¬¨X¬­¥Î´›ª4¡¥¶ª:®”¨ºª”ŸI¾¶¥¶ª4·_ª4´€ªIµ ¢´€ª”¢)¨X¬?ž)ª”¨º¬­¥ ½ ž'ÃÇ´›¤?¿4®4´›¬¨›¢¬?¥°¢(¿4¤­´4¢ž)¡?¡­ž)¡ò óô ªQ¾§Às¤­žU¡­¬­¤?¥°¢¬­žOŸÓ¥¶ªI¸4”¨º¬­¥¶´›ª ¨ºª”Ÿs¤­žOŸ»±4¿4¾¶¥Îµ ¢)¨º¬­¥¶´›ª©¢)¨ºª¦”žL®”¨ºª”ŸI¾¶žOŸ¨›ŸIž¯›±b¨X¬­žU¾§ÀÓ«¥¶¬­® ¬?®4ž¿Q¤­ž)¡?ž)ªI¬'¡­ÀI¡­¬?ž)ÃiÅ Ô ´›Ãž"žÂ4¬­ž)ª4¡?¥¶´›ª4¡'´€¤ õ?ö¼÷Xø'ùUú­ùLû›ü°ýXþ1ýXÿºýXýgý ý ?ù£÷Lù)ú ÷ €ú!" "# $&%' ( ) "*,+ ý  ¶þ0û›ÿIÿ -XýXû.£û€þ0ýXÿ /gý0ßýû"123 54 þ0ýXÿIÿ ¶ü°ü°û€ÿ -Xý6 7%89ú:­ù;<=%8>cù? 3<@BABDCùUú)E $ ¤­ž ½ ¥¶¡­¥¶´›ªQ¡}«¥¶¾¶¾¦”žsª4žO¢ž)¡­¡¨X¤­À8²g´›¤_¨Xª[¨›ŸIžµ ¯›±b¨X¬­žsŸIž)¡£¢¤­¥¶¿4¬­¥¶´€ª©´X²'¾°¨Xª4·€±”¨X·€ž)¡¿b´›¡­¡?ž)¡­¡Áµ ¥§ªQ·žÂ4¬­ž)ª4¡?¥ ½ ž¥¶ªI¸4”¨X¬?¥¶´›ª´›¤¤­žŸI±4¿4¾¶¥°¢)¨X¬?¥¶´›ª8F ã¹ä1´›¡?֛ž)ª4ªQ¥§žUÃ¥tɔåæ›ç€è4ÉIì"G›éUÅ ×®Q¥§¡¾¶¥¶Ã¥¶¬¨X¬­¥¶´Xª®”¨º¡´X²¼¢´€±4¤­¡?žª4´€¬ž)¡£¢U¨X¿bžOŸ_¬?®4ž'ª4´ºµ ¬?¥°¢ž´º² ½ ¨X¤?¥¶´›±4¡¤­ž ½ ¥¶ž)«(ž)¤­¡É(ž›Å ·”Å Ô ¿4¤?´>¨º¬)ã£åæ›æ€ì›é)Ås:ž ¡?®”¨X¾¶¾¨X¤?·›±4žL¬­®b¨X¬¬­®Qž}ÃÇ´›¤?¿4®4´›¬¨›¢¬?¥°¢¾¶¥¶Ã¥¶¬£¨º¬­¥¶´›ª4¡´º² ¬?®4ž¬­¤¨›ŸI¥¶¬­¥¶´€ª”¨X¾b¥¶Ã¿4¾¶ž)ÃǞ)ªI¬£¨X¬?¥¶´›ª4¡c¨X¤?ž¬­®4ž'ŸI¥¶¤­ž¢¬0¤?žµ ¡?±4¾¶¬0´º²$¤?ž)¾¶ÀI¥§ªQ·¡­´›¾¶ž)¾¶ÀL´›ªL¬?®4ž‘¢´›ª”¢)¨º¬­ž)ªb¨X¬­¥¶´€ª_´›¿bž)¤£¨jµ ¬?¥¶´›ª¥¶ªÇô›¤?¿4®4´€¬£¨›¢¬­¥°¢0ŸIžU¡£¢¤?¥§¿Q¬­¥¶´›ª;Å :žŸIž)¡¢¤­¥¶¦bž‘¨Ê¬­žO¢?®4ªQ¥Ù¯€±4ž€É”«¥¶¬?®4¥¶ª_¬­®4ž"ê0ž)¤?´ ¥¶Ãʵ ¿Q¾§žUÞ)ªI¬£¨º¬­¥¶´›ª´X²¸bª4¥¶¬­žµt¡­¬¨X¬­žÃÇ´›¤­¿Q®4´›¾¶´›·€À>ɬ?®”¨X¬¢´›¤Áµ ¤?žO¢¬?¡'¬­®4ž¾¶¥¶Ã¥¶¬¨X¬­¥Î´›ª4¡0¨º¬0¬?®4ž¡­´€±4¤£¢ž€É;·€´›¥¶ª4·¦bž)À€´›ª”Ÿ ¢´›ª”¢)¨º¬­ž)ªb¨X¬­¥¶´€ª'¬­´¨X¾¶¾¶´X«i¬?®4ž²p±4¾¶¾€¤£¨XªQ·›ž(´X²I¸”ª4¥¶¬­žµ¹¡?¬£¨X¬?ž ´€¿”žU¤£¨X¬?¥¶´›ª4¡/¬­´0¦bž(±4¡­žOŸ¥¶ª'ÃÇ´›¤?¿4®4´›¬¨›¢¬?¥°¢ Ÿ»ž)¡£¢¤?¥¶¿4¬­¥¶´›ª/Å H 
ž)·›±Q¾Ù¨º¤tµtžÂ4¿4¤­žU¡­¡­¥¶´€ª}ŸIž)¡¢¤­¥¶¿4¬?¥¶´›ª4¡0¨º¤­ž¢´›Ã¿Q¥§¾¶žOŸ¥¶ªI¬­´ ¸bª4¥¶¬­žµt¡­¬¨X¬­ž¨º±4¬­´€Ã}¨º¬£¨´€¤'¬­¤¨Xª4¡ŸI±”¢žU¤­¡_ãt¢´›¾¶¾¶žO¢¬?¥ ½ ž)¾¶À ¢U¨X¾¶¾¶žOŸLª4ž)¬t«(´›¤?ÖI¡£é'¨º¡±Q¡­±”¨º¾¹É/¨Xª”ŸL¬?®4ž)ª:¬?®4ž¢´›ÃÇ¿4¥¶¾¶ž)¤ ¥¶¡¤­žµÁ¨º¿4¿4¾¶¥¶žOŸ¬­´¥¶¬­¡c´X«ª´›±Q¬­¿4±4¬ɛ¿Q¤­´4ŸI±”¢¥¶ª4·¨0ÃÇ´4ŸI¥Îµ ¸bžOŸÇ¦4±4¬¡­¬?¥¶¾§¾Q¸”ª4¥¶¬?žµ¹¡?¬£¨º¬­ž'ª4žU¬t«¼´€¤­ÖÅ×®4¥¶¡¬­žO¢?®4ªQ¥Ù¯€±4ž€É ¥¶ÃÇ¿4¾¶ž)Þ)ªI¬­žOŸÚ¥¶ª6¨ºª ¨º¾§·€´›¤?¥§¬?®4à ¢)¨X¾¶¾¶žOŸI:JLK=M8NPOQ8R S QTM8O"U:IVQ;É®”¨º¡'¨X¾¶¤­žO¨€ŸIÀL¿4¤­´ ½ žOŸL±4¡?ž²p±4¾$²p´›¤1®”¨Xª”Ÿ»¾§¥¶ª4· Ì&¨X¾°¨À²p±Q¾§¾Îµt¡­¬?ž)Ãͤ?žOŸI±4¿4¾¶¥°¢)¨º¬­¥¶´›ª¨ºª”Ÿ"Ï0¤£¨º¦4¥°¢c¡?¬­ž)Ã¥¶ªIµ ¬?ž)¤£Ÿ»¥§·€¥¶¬£¨X¬?¥¶´›ª;ÉQ«®4¥°¢­®&«¥§¾¶¾/¦”žŸIž)¡¢¤­¥¶¦bžOŸs¦”žU¾§´X«'Å1ëcžµ ²g´›¤­ž&¥§¾¶¾¶±4¡?¬­¤£¨º¬­¥¶ª4·:¬?®4ž)¡­ž ¨º¿4¿4¾¶¥°¢)¨X¬?¥¶´›ª4¡É«¼ž&«¥§¾¶¾ ¸”¤­¡?¬ ´€±4¬­¾¶¥¶ª4ž´›±4¤"·›ž)ª4žU¤£¨X¾¨X¿Q¿4¤­´I¨›¢?®i¬?´L¸”ª4¥¶¬?žµ¹¡?¬£¨º¬­žÃÇ´›¤Áµ ¿Q®4´›¾¶´›·€À>Å W X H­.1Ht24,:YZ¼24A/24,7[ÒcG8\130Òc=?Ò^]c@ _L`Pa bc^d8e)f6ghg,d8cBikjmlnc^lno d8ph'qVc Ætª:¬?®4žLô€¡­¬'¬?®4ž)´›¤?À€µ"¨ºª”Ÿ:¥¶ÃÇ¿4¾¶ž)Þ)ªI¬£¨º¬­¥¶´›ªIµtª4ž)±Q¬­¤£¨º¾ ²g´›¤­ÃsÉô€¤­¿4®Q´›¾¶´›·€¥Ù¢U¨X¾c¨Xª”¨º¾¶À>¡?¥¶¡£ØX·€ž)ª4ž)¤¨X¬?¥§´€ª:´X²0«¤?¥§¬Áµ ¬?ž)ª:«(´›¤£Ÿ»¡¢)¨Xª:¦bžÃ´4ŸIžU¾§žŸi¨º¡¨Ç¤­ž)¾°¨X¬?¥¶´›ª:¦bž)¬t«¼žUž)ª ¬?®4ž(«¼´€¤£ŸI¡c¬?®4ž)ÃÇ¡­ž)¾ ½ žU¡$¨ºª”Ÿ¨Xª”¨º¾§ÀI¡?ž)¡´X²¬­®4´€¡­ž(«¼´€¤£ŸI¡Å ×®Qži¦”¨º¡­¥°¢:¢¾°¨X¥¶Ã ´€¤_®4´›¿bžs´X²¬­®4žs¸”ªQ¥§¬?žµt¡­¬£¨º¬­ž ¨º¿Iµ ¿Q¤­´>¨€¢?®¬?´0ª”¨º¬­±4¤¨X¾Îµt¾Ù¨ºª4·›±b¨X·›ž/ô€¤­¿4®4´€¾¶´›·›À(¥¶¡;¬?®”¨X¬¬?®4ž ÃL¨X¿4¿Q¥§ªQ·²p¤?´›ÃÈ«¼´€¤£ŸI¡¬­´s¬­®Qž)¥¶¤‘¨ºª”¨X¾¶ÀI¡­ž)¡Ç㹨ºª”Ÿ ½ ¥°¢ž ½ ž)¤­¡¨›éc¢´›ª4¡?¬­¥¶¬­±4¬?ž)¡¨0¤?ž)·›±Q¾Ù¨º¤¤­ž)¾°¨X¬?¥¶´›ª;ÉX¥tÅ ž›Å›¨1¤?ž)¾°¨X¬­¥¶´€ª ¬?®”¨X¬L¢)¨Xª8¦bži¤?ž)¿4¤­žU¡­ž)ªI¬­žŸ[¦IÀ©¨s¸”ª4¥¶¬?žµ¹¡?¬£¨º¬­ž:¬­¤¨Xª4¡Áµ Ÿ»±”¢ž)¤Å×®4žL¾Ù¨ºª4·›±b¨X·›žÇ¬­´s¦”žs¨Xª”¨º¾§ÀIï)žŸ ¢´€ª4¡­¥¶¡­¬?¡´º² ¡?¬­¤?¥§ªQ·›¡Lãsr «¼´€¤£ŸI¡trE¡?žO¯€±4ž)ª”¢žU¡}´º²'¡­ÀIæ”´€¾¶¡£é«¤?¥§¬Áµ Regular Expression Compiler Analysis Strings ANALYSIS/ GENERATION Word Strings à/¥¶·›±4¤?ž'å›ò^u(´›Ã¿Q¥§¾°¨º¬­¥¶´›ª´X² 
¨vHž)·€±4¾°¨X¤¼Ü/Â4¿4¤?ž)¡­¡?¥¶´›ª¥¶ªI¬­´¨Xªw8xyL¬?®”¨X¬¼Ì&¨X¿4¡(¦”žU¬t«¼ž)žUª}׫(´zH ž)·›±4¾°¨º¤$â/¨Xª4·€±”¨X·€ž)¡ ¬­žUª ¨€¢)¢´›¤ŸI¥¶ª4·¬­´Ó¡­´›ÃǞiŸIž¸bª4žOŸ ´›¤­¬?®4´›·€¤£¨º¿4®IÀ>ÅÆtª ¨¢´€ÃÞU¤£¢¥°¨X¾;¨º¿4¿4¾¶¥°¢)¨X¬?¥¶´›ª²g´›¤0¨·€¥ ½ ž)ªLª”¨º¬­±4¤¨X¾;¾°¨ºªIµ ·›±b¨X·›ž€Éc¬­®Qž}¾°¨ºª4·›±”¨º·›žÇ¬­´s¦”ž_¨Xªb¨X¾¶ÀIï)žOŸÓ¥¶¡±4¡?±”¨X¾¶¾¶À ¨ ·›¥ ½ ž)ª;ÉXž›Å ·bÅX¬­®4ž¡?ž)¬c´º² ½ ¨º¾§¥°Ÿ'àQ¤­žUª”¢?®«¼´€¤£ŸI¡c¨X¡ƒ«¤­¥¶¬­¬?ž)ª ¨›¢U¢´›¤ŸI¥¶ª4·i¬?´i¡?¬£¨ºª”Ÿ4¨X¤Ÿà4¤?ž)ª”¢?® ´€¤­¬­®Q´›·›¤¨X¿4®IÀIÅs×®4ž ¨Xªb¨X¾¶ÀI¡­¥¶¡L¾Ù¨ºª4·›±b¨X·›žs¨X·>¨º¥¶ª ¢´€ª4¡­¥¶¡?¬­¡_´º²‘¡?¬­¤­¥¶ª4·€¡Oɦ4±Q¬ ¡­¬?¤­¥¶ª4·€¡ŸIž)¡?¥¶·›ª4žOŸL¨€¢)¢´›¤ŸI¥¶ª4·¬?´¬­®4žª4žUžOŸI¡'¨Xª”ŸÇ¬¨X¡­¬?ž ´X²¬­®4ž:¾¶¥¶ª4·›±Q¥§¡?¬Oɤ?ž)¿4¤?ž)¡­ž)ªI¬?¥§ªQ·-¨Xª”¨º¾¶À>¡?ž)¡L´X²¬?®4ž:´›¤Áµ ¬­®Q´›·›¤¨X¿4®Q¥Ù¢U¨X¾0«(´›¤ŸI¡OÅñÆt¬¥¶¡¡­´€Ãž)¬?¥§ÃǞ)¡¢´›ª ½ žUª4¥¶ž)ªI¬ ¬­´ŸIž)¡?¥§·€ª-¬?®4ž)¡?ž ¨Xª”¨º¾§ÀI¡?¥§¡¡?¬­¤?¥§ªQ·›¡L¬­´Ó¡­®4´X«È¨X¾¶¾0¬?®4ž ¢´€ª4¡­¬?¥§¬?±4ž)ªI¬0ÃÇ´›¤?¿4®4ž)ÃǞ)¡¼¥¶ªL¬­®Qž)¥¶¤Ã´›¤?¿4®4´€¿4®4´›ªQž)Ã¥°¢ ²p´€¤­ÃiÉ(¡?ž)¿”¨X¤¨X¬?žOŸ[¨Xª”Ÿ ¥°ŸIž)ªI¬­¥Î¸”žŸÅBÆtª©´€¬­®4žU¤i¨X¿4¿Q¾§¥Îµ ¢)¨º¬­¥¶´›ª4¡É(¥§¬ÃL¨Ài¦bž_±4¡?ž²p±4¾¬?´ ŸIž)¡­¥¶·€ªS¬?®4ži¨ºª”¨X¾¶ÀI¡­¥¶¡ ¡­¬?¤­¥¶ª4·€¡_¬­´©¢´€ª>¬¨X¥¶ª[¬­®Qž:¬­¤£¨€ŸI¥¶¬­¥¶´›ªb¨X¾'ŸI¥°¢¬­¥¶´€ª”¨X¤?À-¢¥Îµ ¬£¨º¬­¥¶´›ªÇ²p´€¤­Ãsɛ¬?´›·›žU¬­®4ž)¤«¥¶¬­®L¾¶¥¶ª4·›±Q¥§¡?¬tµt¡­ž)¾¶žO¢¬­žOŸ ó ¬£¨º·"F ¡­ÀIæb´›¾¶¡¾§¥¶Ö€ž|{}~€n ɂ{nƒ8„…†ɂ{8‡ˆ ɉ{nŠ‹Ƀ¬­®”¨º¬¢´›ªIµ ½ žUÀ ¢U¨X¬­žU·›´›¤?À>Éb¿bž)¤­¡?´›ª;Ɇª>±Q攞)¤É4¬?ž)ª4¡­ž€ÉôI´4ŸÉ4¢)¨º¡­ž›É ž)¬¢XÅ ×®>±Q¡i¬?®4žS¨ºª”¨X¾¶ÀI¡­¥¶¡_¡?¬­¤­¥¶ª4·8¤­žU¿4¤­ž)¡?ž)ªI¬­¥¶ª4·[¬?®4ž ¸”¤?¡­¬tµt¿bž)¤­¡?´›ª¡­¥¶ª4·€±4¾°¨X¤OÉ¿Q¤­ž)¡?ž)ªI¬i¥¶ª”ŸI¥°¢)¨º¬­¥ ½ žs²g´›¤­Ã´X² ¬­®Qž0à4¤?ž)ª”¢?® ½ ž)¤?¦&ŒŽ ‘˜ã ó ¬?´¿”¨ÀnF>é/Ã¥¶·›®I¬¦bž¡­¿bž)¾¶¾¶žOŸ ’8“” „…n{V• –nŠn{8‡ˆ{nŠ6—?{"ƒ8„…n† Å Æg² ¬?®4ž'¤­žU¾Ù¨º¬­¥¶´›ªL¥¶¡(¸”ª4¥¶¬?žµ¹¡?¬£¨º¬­ž›Éb¬­®Qž)ª_¥¶¬0¢)¨XªL¦bžŸIžµ ¸”ª4žŸ±4¡­¥¶ª4·Ó¬­®4ž&Þ)¬¨X¾°¨Xª4·€±”¨X·€ž´º²˜¤?ž)·›±Q¾Ù¨º¤žÂ4¿4¤­žU¡tµ ¡­¥¶´€ª4¡OË;¨ºª”ŸÉb«¥¶¬­®:¨¡?±4¥¶¬£¨X¦Q¾§ž¢´›ÃÇ¿4¥¶¾§žU¤OÉ4¬?®4ž¤­žU·›±4¾°¨X¤Áµ žÂ4¿4¤?ž)¡­¡?¥§´€ª¡­´›±Q¤£¢ž¢´4ŸIž©¢)¨Xª[¦bžS¢´€Ã¿4¥¶¾¶žOŸ ¥¶ª>¬?´-¨ 
¸”ª4¥¶¬?žµ¹¡?¬£¨º¬­ž¬­¤¨Xª4¡ŸI±”¢ž)¤8ã)wŽx?y;é)ɨº¡:¡­®4´X«ªÚ¥§ªà/¥§·ºµ ±4¤?žå›ÉI¬?®”¨X¬¥¶Ã¿Q¾§žUÞ)ªI¬­¡ƒ¬­®4ž'¤?ž)¾°¨X¬?¥§´€ª}¢´€Ã¿4±Q¬£¨X¬?¥¶´›ªIµ ¨X¾¶¾¶ÀIÅ'à4´€¾§¾¶´X«¥¶ª4·¢´›ª ½ ž)ªI¬­¥¶´›ª;Ɇ«¼ž«¥¶¾¶¾;´º²p¬­ž)ªs¤­ž²pž)¤¬­´ ¬­®Qž¼±4¿Q¿”ž)¤¿4¤?´˜žO¢¬­¥¶´›ª'´º²4¬­®4ž‚w8xycɤ­žU¿4¤­ž)¡?ž)ªI¬­¥¶ª4·¨ºª”¨X¾Îµ ÀI¡­ž)¡É¨º¡}¬?®4ž™O"QTš:N'IVU:O¾Ù¨ºª4·›±b¨X·›ž€É¨Ó¡­žU¬_´X²¾¶žÂ4¥°¢)¨º¾ ¡­¬?¤­¥¶ª4·€¡O˨Xª”Ÿ&«¼ž«¥¶¾¶¾;¤?ž²pž)¤¬­´L¬?®4ž¾¶´X«(ž)¤¿4¤­´˜žO¢¬?¥§´€ª ¨X¡&¬­®4ž›x?œ S wU:IVQ;°¨Xª4·€±”¨X·€ž›É'¢´€ª4¡­¥¶¡­¬?¥¶ª4·´X²¡­±Q¤t²¹¨€¢ž ¡­¬?¤­¥¶ª4·€¡OÅ×®4ž)¤?ž¨X¤?ž¢´€Ã¿bž)¾¶¾¶¥§ªQ·˜¨€Ÿ ½ ¨XªI¬¨X·›žU¡¼¬?´‘¢´€Ãµ ¿4±4¬?¥¶ª4·L«¥§¬?®_¡­±b¢­®s¸bª4¥¶¬­žµt¡­¬¨X¬­žÃL¨›¢?®4¥¶ª4ž)¡É¥¶ª”¢¾¶±”ŸI¥¶ª4· Ã}¨º¬­®4žUÃ}¨X¬?¥°¢)¨X¾(ž)¾¶ž)·>¨ºª”¢ž›É@bžÂ4¥§¦Q¥§¾¶¥¶¬tÀIɼ¨XªbŸ²p´€¤ÃÇ´›¡­¬ ª”¨º¬­±4¤¨X¾Îµ¹¾°¨ºª4·›±”¨º·›ž0¨º¿4¿4¾¶¥°¢)¨X¬?¥§´€ª4¡Oɺ®4¥¶·›®žî}¢¥¶ž)ª”¢ÀL¨Xª”Ÿ Ÿ4¨º¬£¨µ­¢´€Ã¿”¨€¢¬­¥¶´›ª/Å ô ª4ž‘¢´›Ã¿Q±4¬­ž)¡1«¥¶¬­®žw8x?y¡¦IÀ_¨X¿Q¿4¾¶À>¥¶ª4·¬?®4ž)Ãsɔ¥¶ª ž)¥¶¬­®Qž)¤_ŸI¥¶¤­žO¢¬?¥¶´›ª;ɬ?´S¨ºª©¥¶ª4¿4±4¬Ç¡­¬­¤?¥¶ª4·”Å[ ®4ž)ª©´€ª4ž ¡­±b¢­®Ÿw8xyL¬­®b¨X¬(« ¨º¡c«¤­¥¶¬­¬?ž)ª²g´›¤¼àQ¤­ž)ªb¢­®Ç¥¶¡¼¨X¿4¿Q¾§¥¶žOŸ¥¶ª ¨XªL±4¿I«¨X¤Ÿ_ŸI¥¶¤­žO¢¬­¥¶´›ª_¬?´¬?®4ž'¡­±Q¤t²¹¨€¢ž«(´›¤Ÿ¡ z¢ £¤"¥n£ ã ó ®4´›±Q¡­ž)¡F>é)É"¥¶¬_¤­ž)¬?±4¤­ªQ¡:¬­®4žÓ¤­ž)¾°¨X¬?žOŸ8¾¶žÂ4¥°¢)¨X¾'¡?¬­¤?¥§ªQ· ¦“V§¨ ~"n{n©8„ ¦ {nŠn‹n{}Ž~€n É(¢´›ªQ¡­¥¶¡­¬?¥§ªQ· ´X²¬­®4ž ¢¥¶¬¨X¬­¥¶´€ª ²g´›¤­Ã¨Xª”ŸÊ¬£¨º·'¡­ÀIæb´›¾¶¡c¢­®Q´›¡­žUª¦IÀ‘¨˜¾§¥¶ª4·€±4¥¶¡­¬c¬?´¢´€ªIµ ½ ž)À¬­®”¨º¬;¬?®4ž¼¡?±4¤t²t¨›¢ž(«¼´€¤£Ÿ¥¶¡¨(²pž)ÃÇ¥¶ª4¥¶ª4žª4´›±4ª"¥§ª¬?®4ž ¿Q¾§±Q¤£¨X¾»²p´›¤?ÃiÅbÏ©¡­¥¶ª4·€¾¶ž¡­±4¤Á²¹¨›¢ž1¡­¬?¤­¥¶ª4·¢)¨ºª¦”ž¤?ž)¾°¨X¬?žOŸ ¬?´Ã±4¾¶¬­¥¶¿4¾¶ž;¾¶žÂ4¥°¢)¨º¾X¡­¬­¤?¥¶ª4·›¡ÉXž›Å ·”ŨX¿4¿4¾¶ÀI¥¶ª4·(¬­®4¥¶¡6w8x?y¥¶ª ¨ºª'±4¿I«¨X¤£ŸŸI¥¶¤­ž¢¬­¥¶´›ª¬­´¬?®4ž¡­±4¤Á²¹¨›¢ž¡­¬­¤?¥¶ª4·&£;ªn¢ £(¿4¤­´ºµ Ÿ»±”¢ž)¡c¬?®4žƒ²p´›±4¤¤?ž)¾°¨X¬­žŸ'¾¶žÂ4¥°¢)¨X¾I¡?¬­¤­¥¶ª4·€¡c¡­®Q´X«ª¥¶ªà/¥¶·Xµ ±Q¤­ž}ìQÅ Ô ±b¢­®¨Xæ4¥¶·€±4¥¶¬tÀ}´º²0¡­±4¤Á²¹¨›¢ž¡­¬?¤­¥¶ª4·€¡¥¶¡ ½ ž)¤?À ¢´›ÃÃÇ´›ª;Å « „¬…8„{V• –nŠn{8‡ˆ{nŠ6—?{"ƒ8„…n† ¨ € §­ …8„{• n–nŠn{އˆn{nŠ®{nƒ8„"…† ¨ € §­ …8„{• n–nŠn{އˆn{nН—?{nƒ8„"…† ¨ € §­ 
…8„{• ¦8’ {8‡"ˆn{nŠ8® {nƒ8„…† à;¥¶·€±4¤­ž'ì4òcÌ ±4¾¶¬­¥¶¿4¾¶ž0Ï1ª”¨X¾¶ÀI¡­ž)¡ ²p´›¤@£ªn¢ £ u(´›ª ½ žU¤­¡­žU¾§ÀIɬ­®Qž ½ žU¤­Às¡£¨XÃǞw8xy©¢)¨ºª ¦bž}¨X¿Q¿4¾¶¥§žŸ ¥¶ª©¨ Ÿ»´X«ª>«¨X¤ŸŸI¥¶¤­žO¢¬?¥¶´›ª©¬?´S¨:¾¶žÂ4¥°¢)¨º¾0¡­¬?¤­¥¶ª4·Ó¾§¥¶Ö€ž « „¬n…8„{• n–nŠn{އˆn{nŠ6—{nƒ8„"…n†¬?´¤?ž)¬­±Q¤­ª¬?®4ž¤­žU¾Ù¨º¬­žOŸÊ¡­±4¤Áµ ²t¨›¢žÇ¡­¬­¤?¥¶ª4·›£ªn¢ £›Ë¡?±”¢?®¬­¤£¨ºª4¡£Ÿ»±”¢ž)¤?¡}¨X¤?žL¥§ªQ®4ž)¤­žUª>¬?¾¶À ¦Q¥ÙŸ»¥§¤?žO¢¬?¥§´€ª”¨X¾tÅÏ1æ4¥¶·›±4¥¶¬tÀ¥¶ª_¬?®4žŸI´X«ªI«¨X¤£ŸsŸI¥¶¤­ž¢?µ ¬?¥¶´›ªL¥¶¡¨X¾¶¡?´¿”´€¡­¡­¥¶¦4¾¶ž›É¨º¡0¥¶ªL¬­®4ž'¤?ž)¾°¨X¬?¥§´€ª_´X²c¬?®4ž'¾¶žÂIµ ¥°¢)¨º¾¡­¬­¤?¥¶ª4· ’8“” „…n{V• –nŠn{8‡ˆ{nŠ6—?{"ƒ8„…n†Lã ó Æ1¿”¨ÀnF>é ¬­´ ¬?®4ž_¡?±4¤t²¹¨€¢ž:¡­¬?¤­¥¶ª4·€¡Œ8"¢P ¨ºª”Ÿ°Œ8" É0«®Q¥Ù¢?®-¨º¤­žs¥¶ª ²t¨›¢¬ ½ ¨X¾¶¥°Ÿ:¨X¾¶¬­žU¤­ª”¨º¬­žL¡­¿bž)¾¶¾¶¥§ªQ·›¡¥§ªÓ¡­¬¨Xª”Ÿ4¨º¤£ŸÓà4¤?ž)ª”¢?® ´€¤­¬?®4´›·€¤£¨X¿Q®>ÀIÅ _L`/_ ±kqo ²^³^qpd´µph´"g,dTc¶i7b·e'plno c¸dTph)qc^g ×®Qž)¤­ž¨X¤­ž1¬t«¼´¢­®b¨X¾¶¾¶ž)ª4·›žU¡¼¥¶ªÃ´4ŸIžU¾§¥¶ª4·ª”¨º¬­±4¤¨X¾b¾°¨XªIµ ·€±”¨X·€ž0ÃÇ´›¤­¿Q®4´›¾¶´›·€Àò ¹ ÌL´›¤?¿4®4´›¬¨›¢¬?¥°¢¡ ¹7º ®4´›ªQ´›¾¶´›·€¥Ù¢U¨X¾°Ø ô ¤­¬?®4´›·€¤£¨º¿4®4¥°¢)¨X¾QÏ0¾¶¬­ž)¤?ª”¨X¬?¥¶´›ª4¡ à/¥¶ª4¥¶¬­žµ¹¡?¬£¨X¬?ž'ô€¤­¿4®Q´›¾¶´›·€Àô4ŸIž)¾¶¡¦b´›¬­®&±4¡­¥¶ª4·Ç¤­ž)·€±Iµ ¾°¨º¤¼žÂ4¿4¤?ž)¡­¡?¥¶´›ª4¡Åc×®4ž'¡?´›±4¤¢žŸIžU¡£¢¤?¥§¿Q¬­¥¶´›ª4¡1Ã}¨À¨X¾¶¡­´ ¦bž«¤­¥¶¬­¬?ž)ªi¥¶ª_®Q¥§·€®4ž)¤Áµ¹¾¶ž ½ ž)¾ª4´›¬¨X¬­¥¶´€ª4¡'ãtëcž)žU¡­¾¶ž)À:¨Xª”Ÿ 䨺¤­¬­¬?±4ª4ž)ª/É0ì›í€í›í›é"¬­®b¨X¬¨X¤?žL¡­¥¶Ã¿4¾¶Às®4ž)¾¶¿I²p±Q¾0¡­®Q´›¤­¬Áµ ®b¨Xª”ŸI¡²g´›¤¤?ž)·›±Q¾Ù¨º¤žÂ4¿4¤­žU¡­¡­¥¶´€ª4¡_¨Xª”Ÿ¬?®”¨X¬¢´›Ã¿Q¥§¾¶ž›É ±Q¡­¥¶ª4·s¬­®4ž)¥¶¤LŸIžOŸI¥°¢)¨º¬­žOŸ¢´€Ã¿4¥¶¾¶ž)¤­¡Éc¥¶ªI¬­´&¸”ª4¥¶¬­žµ¹¡?¬£¨X¬?ž ªQž)¬t«¼´€¤­ÖI¡OÅÆtªs¿4¤£¨€¢¬­¥°¢ž€É¬­®4žÃÇ´›¡­¬¢´›ÃÇô€ª4¾¶À¡­ž)¿”¨jµ ¤¨X¬?žOŸÃ´4ŸI±Q¾§žU¡;¨X¤­ž¨(¾¶žÂ4¥°¢´›ªw8x?ycÉX¢´›ªI¬£¨º¥¶ª4¥¶ª4·¾¶žÂ4¥Ù¢U¨X¾ ¡?¬­¤?¥§ªQ·›¡OÉ/¨Xª”Ÿ_¨Ê¡­ž)¿b¨X¤£¨º¬­ž)¾¶Às«¤­¥¶¬­¬?ž)ªs¡­ž)¬´X²(¤­±4¾¶ž&wŽx?y¡ ¬?®”¨X¬ÃL¨X¿²p¤?´›ÃĬ­®4ž¡?¬­¤­¥¶ª4·€¡0¥¶ªL¬­®4ž¾¶žÂ4¥°¢´›ªL¬?´¿4¤­´€¿Iµ žU¤­¾¶À_¡­¿bž)¾¶¾¶žOŸs¡­±Q¤t²¹¨€¢ž¡­¬?¤­¥¶ª4·›¡Å×®4ž¾¶žÂ4¥°¢´›ª:ŸIž)¡¢¤­¥¶¿Iµ ¬?¥¶´›ª-ŸIž¸”ª4ž)¡_¬?®4ž:ô€¤­¿4®Q´›¬£¨€¢¬­¥°¢¡Ç´X²¬­®4žs¾°¨Xª4·€±”¨X·€ž›É 
Lexicon Regular Expression Rule Regular Expression Compiler Lexicon FST .o. Rule FST Lexical Transducer (a single FST) à;¥¶·€±4¤­ž'è4ò^u(¤­ž¨X¬­¥¶´€ª}´º²$¨↞Â4¥Ù¢U¨X¾×¤£¨ºª4¡£ŸI±b¢ž)¤ ¨XªbŸ ¬?®4ž_¤?±4¾¶ž)¡_ŸIž¸”ª4ž&¬­®4ži¨º¾¶¬­ž)¤?ª”¨X¬?¥§´€ª4¡OÅ8×®4ž&¡­ž)¿Iµ ¨X¤¨X¬?ž)¾¶À ¢´›ÃÇ¿4¥¶¾§žŸ_¾¶žÂ4¥°¢´›ª¨Xª”Ÿ³¤­±4¾¶žŸwŽx?y¡¢)¨XªÓ¡­±4¦Iµ ¡­ž¯›±Qž)ªI¬­¾¶À¦bž¢´€Ã¿b´›¡­žŸ¬­´€·›ž)¬?®4ž)¤¨X¡¥¶ª}à/¥¶·›±4¤?ž'謭´ ²p´€¤­Ã ¨ ¡­¥¶ª4·›¾¶ž&O"Q8š:N)IVU:O›y S U:»6x?¼6œ6IVQ S ã¹ä¨º¤­¬­¬?±4ª4ž)ª ž)¬'¨X¾tŶÉåOæ€æ›ì›é ¬­®”¨º¬'¢´›±4¾°Ÿ&®”¨ ½ ž¦bž)ž)ª ŸIž¸bª4žOŸsžO¯€±4¥ ½ µ ¨X¾¶ž)ªI¬?¾§ÀIÉ(¦4±4¬L¿bž)¤­®”¨º¿4¡L¾¶ž)¡­¡L¿bž)¤­¡?¿4¥°¢±4´›±Q¡­¾¶À ¨ºª”Ÿ¾¶ž)¡?¡ žî}¢¥¶ž)ªI¬?¾§ÀIÉI«¥¶¬­®L¨¡­¥¶ª4·›¾¶ž¤­žU·›±4¾°¨X¤(žÂ4¿4¤?ž)¡­¡?¥§´€ª;Å à4´€¤¼žÂQ¨ºÃ¿4¾¶ž›É€¬­®Qž'¥¶ªI²p´›¤?Ã}¨º¬­¥¶´›ª'¬?®”¨X¬¬?®4ž¢´€Ã¿”¨º¤tµ ¨X¬?¥ ½ ž´X²¼¬?®4ž¨›Ÿ˜žO¢¬­¥ ½ ž|½¢¾L¥¶¡½¢5¾¾‘ÊÃ¥¶·›®I¬¦bž¤­ž)¿Iµ ¤­žU¡­ž)ªI¬­žŸ:¥¶ªi¬?®4žÜª4·›¾¶¥¶¡­®s¾¶žÂ4¥°¢)¨X¾¬­¤¨Xª4¡ŸI±”¢žU¤¦IÀ_¬?®4ž ¿”¨º¬­®sãsrÚ¡­ž¯›±Qž)ª”¢ž´º²$¡?¬£¨º¬­ž)¡¨Xª”ŸL¨X¤¢¡£é1¥¶ª_à/¥§·€±4¤­ž,¿bÉ «®4ž)¤?ž¬­®4žïUž)¤­´€¡'¤­ž)¿Q¤­ž)¡?ž)ªI¬ž)¿4¡­¥¶¾¶´›ª&¡­ÀIæ”´€¾¶¡OÅÁÀÓ×®4ž ‹8„ §Ã"“nÄŨާ –8„‰Æ b b i i g g g 0 0 +Adj e 0 r +Comp ‡€n…Ç “Tà „ ¨Ž§ –8„‰Æ à/¥¶·›±4¤?žÈ¿bòÏ º ¨X¬­®L¥¶ª}¨×¤¨Xª4¡ŸI±”¢ž)¤ ²p´›¤0Ü$ª4·›¾¶¥¶¡­® ·›žUÃ¥¶ª”¨X¬?¥¶´›ª'´º²Êɑ¨ºª”Ÿ¬­®Qžž)¿”žUª>¬?®4ž)¬?¥Ù¢U¨X¾¸„"¥¶ª¬?®4ž0¡?±4¤tµ ²¹¨€¢žL²p´€¤­Ã˽¢¾¾"‘_¤?ž)¡­±4¾¶¬Ê²p¤­´€Ã ¬?®4ži¢´€Ã¿b´›¡­¥¶¬?¥§´€ª:´X² ¬­®Qž´›¤­¥¶·€¥§ªb¨X¾;¾¶žÂ4¥°¢´€ªÌw8x?yS«¥¶¬?®i¬?®4ž¤­±4¾¶ž&w8xyÓ¤­ž)¿4¤?žµ ¡­žUª>¬?¥¶ª4·L¬­®4ž¤?ž)·›±4¾°¨º¤ÃÇ´›¤?¿4®4´›¾¶´€·›¥°¢)¨X¾¨X¾¶¬?ž)¤­ª”¨º¬­¥¶´›ªQ¡¥¶ª ܪ4·€¾¶¥§¡?®;Å à4´€¤0¬?®4ž¡¨Xրž´º² ¢¾°¨X¤?¥§¬tÀIɔà/¥¶·›±Q¤­ž&¿}¤?ž)¿4¤?ž)¡­ž)ªI¬?¡¬?®4ž ±4¿4¿bž)¤ÇãsrÛ¾¶žÂ4¥Ù¢U¨X¾°é0¨ºª”Ÿs¬­®4ž¾¶´X«(ž)¤ãsrÛ¡­±4¤Á²¹¨›¢žOé'¡?¥°ŸIž ´X²¬­®Qž¨X¤¢(¾°¨X¦bž)¾”¡?ž)¿”¨º¤£¨X¬?ž)¾¶À´€ª¬­®4ž´€¿4¿b´›¡­¥¶¬?ž¡­¥°ŸIž)¡(´X² ¬­®Qž ¨º¤£¢XÅÆtª¬­®4ž(¤­žUÃ}¨X¥¶ª4¥¶ª4·0Ÿ»¥Ù¨º·›¤¨Xáɫ¼ž(±4¡?ž0¨0ÃÇ´›¤?ž ¢´€Ã¿”¨€¢¬ª4´›¬¨X¬?¥§´€ª;ò(¬­®4ž"±4¿4¿bž)¤¨XªbŸ¬?®4ž¾§´X«(ž)¤1¡­ÀIõ ¦b´›¾¨X¤­žL¢´›Ã¦Q¥§ªQžOŸ_¥¶ªI¬­´s¨L¡­¥¶ª4·€¾§ž¾°¨º¦”ž)¾(´X²0¬?®4ž²g´›¤­Ã € ’n’ „…‚Æ Ä 
~"Í8„…¥Î²;¬­®4ž1¡­ÀIæ”´€¾¶¡$¨º¤­žŸI¥¶¡­¬­¥¶ª”¢¬OÅÏ8¡­¥¶ª4·›¾¶ž ¡­ÀIæb´›¾¥¶¡±4¡­žOŸ&²g´›¤'¨Xªs¥°ŸIž)ªI¬­¥¶¬tÀ_¿”¨º¥¶¤OÅÆtªi¬?®4ž¡­¬¨XªIµ Ÿ4¨º¤£ŸÇª4´›¬¨X¬?¥§´€ª;ÉI¬­®Qž¿b¨X¬­®L¥¶ª}à/¥¶·›±Q¤­ž,¿¥¶¡¾Ù¨º¦”žU¾§žŸ¨X¡ † § ÉÏΉÆÉÐ{nÑn–ŽÒ‰ÆÓÎ7ΉÆÔ„Å{8Õn~ ¦8’ Æ…Å âž»¥°¢)¨º¾¬­¤¨Xª4¡ŸI±”¢ž)¤?¡_¨X¤­žLÃÇ´›¤­žÇžî}¢¥¶ž)ªI¬²p´€¤¨Xª”¨º¾§À€µ ¡­¥¶¡¼¨ºª”Ÿ"·›ž)ª4žU¤£¨X¬?¥¶´›ª¬­®b¨Xª¬­®Qž0¢¾°¨X¡­¡?¥°¢)¨X¾4¬t«(´Xµt¾¶ž ½ žU¾4¡­ÀI¡tµ ¬­žUá_ãtä1´›¡­Ö€ž)ª4ª4¥¶ž)ÃÇ¥tɍåæ›ç€è›é¦bžO¢)¨X±Q¡­ž:¬­®QžiÃÇ´›¤­¿Q®4´Xµ ¬£¨€¢¬­¥°¢¡_¨ºª”Ÿ©¬?®4žsô›¤?¿4®4´€¾§´€·›¥°¢)¨º¾ ¨º¾§¬?ž)¤­ªb¨X¬­¥¶´€ª4¡L®”¨ ½ ž ¦bž)ž)ª©¿4¤?žO¢´€Ã¿4¥¶¾¶žOŸ ¨ºª”ŸÓª4ž)žOŸ ª4´›¬¦”žs¢´›ª4¡?±4¾¶¬­žOŸ ¨X¬ ¤­±Qª>¬?¥¶Ãž›Å Ö×B"&Ø")Á™t÷Xø=Ùnù<ž"ßúص§ùÚø,3 ž" tú)"AùUú!"Ê)A ÁÛnÚù? E!Üm!>^ÁÁ:A"ú,"øÝ>B3µÞ 9ú^‰BÚ 9  E ÌL´›¡?¬¾°¨Xª4·€±”¨X·€ž)¡'¦4±4¥¶¾°Ÿ:«(´›¤ŸI¡¦>À:¡?¥¶Ã¿4¾¶À_¡?¬­¤­¥¶ª4·ºµ ¥¶ª4·sô€¤­¿4®Qž)Þ)¡Êãp¿4¤?ž¸4Â4ž)¡OÉ ¤­´I´›¬?¡¨Xª”ŸÓ¡­±»î»žU¡£é¬­´ºµ ·€ž)¬­®Qž)¤¥§ª&¡­¬­¤?¥°¢¬´›¤£Ÿ»ž)¤­¡Å'×®4žÃ´€¤­¿4®4´€¬£¨€¢¬­¥°¢"ãp«(´›¤Ÿ€µ ¦Q±4¥¶¾ÙŸ»¥§ªQ·>é¿Q¤­´4¢ž)¡?¡­ž)¡´º² ¿4¤?ž¸4”¨X¬?¥¶´›ª ¨XªbŸ_¡?±IX¬­¥¶´€ª ¢U¨Xª¦”žL¡?¬­¤£¨º¥¶·›®I¬t²p´€¤­«¨X¤ŸI¾¶À:ô4ŸIž)¾¶žOŸ³¥§ª³¸”ª4¥¶¬­žL¡?¬£¨X¬?ž ¬?ž)¤­ÃÇ¡¨X¡L¢´›ª”¢U¨X¬­žUª”¨X¬?¥§´€ª;ÅÍ냱4¬¡?´›ÃǞ}ªb¨X¬­±Q¤£¨X¾1¾°¨XªIµ ·€±”¨X·€ž)¡c¨X¾¶¡­´žÂ4®4¥¶¦4¥¶¬ª4´€ªIµÁ¢´›ª”¢)¨º¬­ž)ªb¨X¬­¥ ½ ž¼ÃÇ´›¤­¿Q®4´›¬¨›¢?µ ¬?¥°¢¡OÅ Ô ´€Ãž)¬?¥¶Ãž)¡_¬?®4ž©¾°¨XªQ·›±”¨º·›ž)¡:¬?®4ž)á?ž)¾ ½ ž)¡ ¨º¤­ž ¢U¨X¾¶¾¶žOŸ ó ªQ´›ªIµ­¢´›ªb¢)¨X¬?ž)ª”¨X¬?¥ ½ ž:¾°¨Xª4·€±”¨X·€ž)¡ÓFbɦ4±4¬LÃÇ´›¡­¬ žUÿ4¾¶´XÀs¡­¥¶·›ª4¥Î¸ ¢U¨XªI¬¢´›ªb¢)¨X¬?ž)ª”¨X¬?¥¶´›ª-¨º¡«¼ž)¾¶¾tÉ(¡­´:¬?®4ž ¬?ž)¤­Ã ó ª4´›¬i¢´›Ã¿Q¾§žU¬­ž)¾¶À-¢´€ª”¢)¨º¬­ž)ª”¨º¬­¥ ½ ž?F-¥¶¡_±4¡?±”¨X¾¶¾¶À ÃÇ´›¤?ž0¨X¿4¿4¤?´›¿4¤?¥°¨X¬­ž€Å ÆtªÏ1¤£¨X¦Q¥Ù¢ºÉX²p´€¤cž”¨XÃÇ¿4¾¶ž›ÉX¿4¤?ž¸4Â4ž)¡0¨ºª”Ÿ¡­±IîÂ4ž)¡¨X¬Áµ ¬¨›¢?®¬?´¡?¬­ž)ÃÇ¡¼¥¶ªÇ¬­®4ž1±4¡­±”¨º¾;¢´›ªb¢)¨X¬?ž)ª”¨X¬?¥ ½ ž« ¨ÀIɀ¦4±4¬ ¡?¬­ž)ÃÇ¡0¬?®4ž)á?ž)¾ ½ ž)¡¨X¤?ž²p´›¤?ÞOŸÇ¦IÀ_¨¿4¤?´4¢ž)¡­¡"Ö>ªQ´X«ª ¥¶ªI²g´›¤­ÃL¨X¾¶¾ßÀ¨X¡¥¶ªI¬­ž)¤ŸI¥¶·›¥¶¬£¨º¬­¥¶´›ª/ËO«®Q¥§¾¶ž¥¶ªÌ_¨º¾Ù¨ÀIÉ)ªQ´›±4ª ¿Q¾§±Q¤£¨X¾¶¡¨X¤­ž$²p´›¤?ÞOŸ¦IÀ'¨¿Q¤­´4¢ž)¡?¡cÖIª4´X«ª¨X¡/²p±Q¾§¾Îµt¡­¬?ž)à 
¤?žOŸI±4¿Q¾§¥°¢)¨º¬­¥¶´›ª/Å Ï1¾§¬?®4´›±Q·›® Ï1¤£¨º¦4¥°¢}¨ºª”Ÿ Ì&¨X¾°¨À ¨X¾¶¡­´ ¥¶ª”¢¾§±bŸIž'¿4¤?ž¸4”¨X¬?¥§´€ªi¨XªbŸL¡­±Iº¬­¥¶´›ª_¬?®”¨X¬¨X¤?žÃÇ´»Ÿ>µ žU¾§žŸ ¡?¬­¤¨X¥¶·›®I¬t²g´›¤­«¨X¤ŸI¾¶À:¦IÀS¢´€ª”¢)¨X¬?ž)ª”¨º¬­¥¶´›ª;É1¨:¢´›Ãʵ ¿Q¾§žU¬­ž ¾¶žÂ4¥°¢´€ªÝ¢)¨ºª4ª4´›¬:¦bž´›¦4¬¨X¥¶ª4žOŸñ«¥¶¬­®4´€±4¬_ª4´€ªIµ ¢´›ª”¢)¨º¬­ž)ªb¨X¬­¥ ½ ž¿Q¤­´4¢ž)¡?¡­ž)¡Å :ž¼«¥¶¾¶¾I¿4¤­´4¢žUžOŸ«¥¶¬­®ŸIž)¡¢¤­¥¶¿4¬?¥¶´›ª4¡c´º²”®4´X«Ì_¨X¾°¨À ¤?žOŸI±4¿Q¾§¥°¢)¨º¬­¥¶´›ª¨XªbŸ Ô žUÃ¥¶¬­¥°¢¡?¬­ž)Ã¥¶ªI¬­ž)¤ŸI¥¶·›¥¶¬£¨º¬­¥¶´›ª¨º¤­ž ®b¨Xª”ŸI¾¶žOŸÇ¥¶ªÇ¸”ª4¥¶¬­žµ¹¡?¬£¨X¬?ž'ô€¤­¿4®Q´›¾¶´›·€À±4¡?¥¶ª4·¬?®4ž'ª4ž)« ¢´›Ã¿Q¥§¾¶žµt¤­žU¿4¾°¨›¢ž¨X¾¶·›´€¤­¥¶¬­®4ÃsÅ ß à ÒBá\1H­=?,:YX5S,â\0=tAœ, ×®Qž ¢ž)ªI¬­¤¨X¾¥°ŸIžO¨ ¥¶ª[´›±4¤i¨º¿4¿4¤?´>¨›¢?®[¬?´-¬?®4ž ÃÇ´»Ÿ>µ žU¾§¥¶ª4·s´X²ª4´›ª»µÁ¢´€ª”¢)¨º¬­ž)ª”¨º¬­¥ ½ žL¿4¤?´4¢ž)¡­¡?ž)¡_¥¶¡¬­´ ŸIž¸bª4ž ªQž)¬t«¼´€¤­ÖI¡(±4¡­¥¶ª4·¤­ž)·€±4¾°¨X¤cž»¿Q¤­ž)¡?¡­¥¶´›ª4¡ɔ¨º¡¼¦bž²p´€¤­ž›ËQ¦4±4¬ «(žª4´X«[ŸIž¸”ª4ž¬­®Qž'¡­¬?¤­¥¶ª4·›¡1´X²¼¨XªL¥¶ªI¬­ž)¤?ÞOŸ»¥Ù¨º¬­žª4ž)¬Áµ «(´›¤?Ö¡?´¬­®b¨X¬0¬?®4ž)Ài¢´›ªI¬£¨º¥§ª_¨º¿4¿4¤­´€¿4¤­¥°¨º¬­ž'¡?±4¦4¡­¬?¤­¥¶ª4·€¡ ¬?®”¨X¬¨X¤­žL¬?®4ž)á?ž)¾ ½ ž)¡"¥§ªÓ¬­®4žÊ²p´›¤?Ã}¨º¬´º²0¤­ž)·€±4¾°¨X¤žÂIµ ¿Q¤­ž)¡?¡­¥¶´›ª4¡Å ×®4ž©¢´€Ã¿4¥¶¾¶žµt¤­ž)¿4¾°¨€¢ž ¨X¾¶·›´€¤­¥¶¬­®Qà ¬­®Qž)ª ¤?žO¨X¿Q¿4¾¶¥§žU¡i¬?®4ž¤­ž)·€±4¾°¨X¤tµtžÂ4¿4¤?ž)¡­¡?¥§´€ªB¢´›ÃÇ¿4¥¶¾§žU¤_¬­´8¥¶¬­¡ ´X«ª"´›±4¬?¿4±4¬OÉX¢´€Ã¿4¥¶¾¶¥¶ª4·(¬­®4ž(¤­ž)·€±4¾°¨X¤Áµ¹žÂ4¿4¤?ž)¡­¡?¥¶´›ª¡­±Q¦Iµ ¡?¬­¤?¥§ªQ·›¡(¥§ªL¬?®4ž¥§ªI¬?ž)¤­ÃǞOŸI¥°¨X¬?žª4ž)¬t«(´›¤­ÖL¨Xª”ŸÊ¤­ž)¿Q¾Ù¨€¢¥¶ª4· ¬?®4ž)ÃÄ«¥¶¬­®¬?®4ž'¤?ž)¡­±4¾¶¬´º²$¬?®4ž'¢´›ÃÇ¿4¥¶¾°¨X¬­¥¶´€ª;Å ×´:¬£¨Xրž_¨s¡­¥¶Ã¿4¾¶žª4´€ªIµt¾§¥¶ª4·€±4¥¶¡­¬?¥Ù¢Êž”¨XÃÇ¿4¾¶ž›É¼à/¥¶·Xµ ±Q¤­žzã¤?ž)¿4¤­žU¡­ž)ªI¬­¡¨ª4ž)¬t«(´›¤­ÖL¬?®”¨X¬ÃL¨X¿4¡(¬­®4ž"¤­ž)·€±4¾°¨X¤ ž»¿Q¤­ž)¡?¡­¥¶´›ª “Tä ¥§ªI¬?´ «âå “Tä «æ ˬ?®”¨X¬"¥§¡É;¬?®4ž¡£¨ºÃžžÂIµ ¿Q¤­ž)¡?¡­¥¶´›ªsž)ª”¢¾¶´€¡­žOŸs¦bž)¬t«(ž)ž)ª:¬t«(´¡­¿bžO¢¥°¨º¾¼ŸIž)¾¶¥¶Ã¥¶¬­žU¤­¡OÉ «âå ¨Xª”Ÿ «æ Ƀ¬­®”¨º¬ÃL¨X¤?Öi¥¶¬¨X¡¨_¤?ž)·›±4¾°¨º¤tµtžÂ4¿4¤­ž)¡?¡­¥¶´›ª ¡­±Q¦4¡­¬?¤­¥¶ª4·”Å a * 0:^[ 0:^] à/¥¶·›±4¤?žmã4ò'Ïèçž)¬t«(´›¤?Ö:«¥§¬?® ¨tHž)·€±4¾°¨X¤tµ­Ü/Â4¿4¤?ž)¡­¡?¥§´€ª Ô ±4¦Q¡­¬­¤?¥¶ª4· 
×®4ž-¨º¿4¿4¾¶¥°¢)¨X¬?¥§´€ªÍ´º²i¬­®4ž8¢´›ÃÇ¿4¥¶¾§žµ¹¤?ž)¿4¾°¨›¢ž©¨º¾§·€´Xµ ¤­¥¶¬?®4ÃB´›ªL¬?®4ž'¾¶´X«¼žU¤¼¡?¥ÙŸ»ž´º²c¬­®4žª4ž)¬t«(´›¤?Öž)¾¶¥¶Ã¥¶ª”¨º¬­ž)¡ ¬­®QžÃL¨X¤?֛ž)¤?¡OÉI¢´›ÃÇ¿4¥¶¾§žU¡c¬­®4ž¤?ž)·›±4¾°¨º¤¼žÂ4¿4¤?ž)¡­¡?¥¶´›ª;Éb¨Xª”Ÿ Ã}¨º¿4¡¬­®Qž±4¿4¿bž)¤¡­¥°ŸIž´X²(¬­®4ž¿”¨X¬?®i¬?´¬?®4ž¾°¨ºª4·›±”¨º·›ž ¤­žU¡­±4¾¶¬­¥¶ª4·²g¤­´›ÃĬ­®Qž¢´›ÃÇ¿4¥¶¾°¨X¬­¥¶´€ª;Å×®4žª4ž)¬t«(´›¤?Ö_¢¤­žµ ¨X¬?žOŸÇ¦IÀ¬­®Qž´€¿”žU¤£¨X¬?¥¶´›ªL¥¶¡¡­®4´X«ªL¥¶ª}à/¥¶·›±4¤?žé4Å a:0 a *:0 *:0 0:a *:a à/¥¶·›±4¤?žêé4ò ϲp¬?ž)¤-¬?®4žÏ0¿4¿Q¾§¥°¢)¨º¬­¥¶´›ªE´º²ëu(´›ÃÇ¿4¥¶¾§žµ HžU¿4¾°¨›¢ž ©®4žUª ¨X¿4¿4¾¶¥¶žOŸ¥¶ªÄ¬­®4ž ó ±4¿I« ¨º¤£ŸnFBŸI¥¶¤?žO¢¬­¥¶´€ª;É_¬?®4ž ¬­¤¨Xª4¡ŸI±”¢žU¤¥§ªà/¥¶·›±4¤?ž=é0ÃL¨X¿4¡c¨ºª>À"¡­¬­¤?¥¶ª4·'´X²¬­®Qž¥§ª»¸4µ ª4¥¶¬­ž “Tä ¾°¨ºª4·›±”¨º·›ž1¥§ªI¬?´¬?®4ž'¤­žU·›±4¾°¨X¤ž»¿Q¤­ž)¡?¡­¥¶´›ªL²g¤­´›Ã «®4¥°¢?®L¬­®4ž¾°¨Xª4·€±”¨X·€ž0«¨X¡0¢´€Ã¿4¥¶¾¶žOŸÅ ×®4ž_¢´›ÃÇ¿4¥¶¾¶žµ¹¤?ž)¿4¾°¨›¢ž}¨X¾¶·€´›¤­¥¶¬?®4ÃÈ¥§¡žU¡­¡­žUª>¬?¥°¨X¾¶¾¶ÀS¨ ½ ¨X¤?¥°¨XªI¬c´X²c¨'¡?¥¶Ã¿4¾¶ž¤­ž¢±4¤­¡?¥ ½ žµ­ŸIž)¡¢ž)ª4¬'¢´€¿>ÀI¥¶ª4·¤­´€±Iµ ¬­¥¶ª4ž€ÅÆt¬¼žÂ4¿bžO¢¬?¡0¬­´"¸”ª”ŸÊÃ}¨º¤­Ö›žŸ'¤?ž)·›±4¾°¨º¤tµtžÂ4¿4¤­ž)¡?¡­¥¶´›ª ¡­±Q¦4¡­¬?¤­¥¶ª4·›¡(´›ª¬­®Qž0ŸIž)¡­¥¶·€ª”¨X¬?žOŸ¡­¥°ŸIžãg±4¿4¿bž)¤¼´€¤c¾¶´X«¼žU¤£é ´X²¬­®4žª4žU¬t«¼´€¤­ÖÅBìªI¬­¥¶¾¨XªÇ´›¿bž)ª4¥¶ª4·¾§¥¶ÃÇ¥§¬?ž)¤ «âå ¥§¡(ž)ªIµ ¢´€±4ªI¬­ž)¤?žOŸÉb¬­®4ž'¨º¾§·€´›¤?¥§¬?®4ÃB¢´›ªQ¡­¬­¤?±”¢¬?¡¨¢´›¿IÀ´º² ¬?®4ž ¿”¨º¬­® ¥¶¬¥¶¡"²p´›¾¶¾¶´X«¥¶ª4·”Å&Æg²¬­®Qž_ª4ž)¬t«(´›¤?Ö ¢´›ªI¬£¨º¥¶ª4¡ª4´ ¤­žU·›±4¾°¨X¤Áµ¹ž»¿Q¤­ž)¡?¡­¥¶´›ªs¡­±4¦Q¡­¬­¤?¥¶ª4·›¡ɔ¬?®4ž'¤?ž)¡­±4¾¶¬1«¥§¾¶¾¦bž¨ ¢´€¿>À:´º²0¬­®4žL´€¤­¥¶·›¥¶ª”¨º¾cª4ž)¬t«(´›¤?Ö Å³©®4ž)ª¨ «âå ¥§¡ž)ªIµ ¢´€±4ªI¬­ž)¤?žOŸÉ¬­®QžÕ¨º¾§·€´›¤?¥§¬?®4à ¾§´I´€Ö>¡Ê²p´›¤_¨Ó¢¾¶´›¡­¥¶ª4· «æ ¨XªbŸž»¬?¤£¨€¢¬­¡¬­®Qž¿”¨º¬­®L¦bž)¬t«¼žUž)ªi¬?®4ž'ÃL¨X¤­Ö€ž)¤­¡¬?´¦bž ®”¨ºª”ŸI¾¶žOŸÇ¥¶ª}¨¡­¿bžO¢¥°¨º¾;«¨À ò å›Å(×®4ž¡?ÀIæ”´€¾§¡¨X¾¶´›ªQ·¬­®Qž¥¶ª”ŸI¥°¢)¨º¬­žOŸ&¡­¥°ŸIž´º² ¬?®4ž ¿”¨º¬­®¨º¤­ž¢´›ª”¢)¨º¬­ž)ªb¨X¬­žŸ¥¶ªI¬­´¨1¡­¬?¤­¥¶ª4·¨Xª”ŸžU¾§¥¶Ãʵ ¥¶ª”¨º¬­žOŸ˜²p¤­´€Ã[¬­®4ž¿b¨X¬­®¾¶žO¨ ½ ¥¶ª4·¸˜±Q¡­¬¬­®4ž¡?ÀIæ”´€¾§¡ ´€ª}¬?®4ž´›¿4¿b´›¡?¥¶¬­ž¡­¥°ŸIž›Å 
ì4Å(ÏB¡?ž)¿”¨º¤£¨X¬?žLª4ž)¬t«(´›¤­Ös¥¶¡¢¤?žO¨X¬?žOŸ:¬?®”¨X¬¢´›ªI¬¨X¥¶ª4¡ ¬?®4ž'ô4ŸI¥Î¸”žŸ¿”¨X¬?®;Å è4Å(×®4žƒžÂ4¬­¤£¨€¢¬­žŸ'¡?¬­¤­¥¶ª4·1¥¶¡;¢´›ÃÇ¿4¥¶¾§žŸ¥¶ªI¬­´0¨(¡­ž¢´›ª”Ÿ ª4žU¬t«¼´€¤­Ö'«¥¶¬­®¬­®Qž¼¡?¬£¨XªbŸ4¨X¤Ÿ'¤?ž)·›±4¾°¨º¤tµtžÂ4¿4¤­ž)¡?¡­¥¶´›ª ¢´€Ã¿4¥¶¾¶ž)¤Å ¿”Å(×®4ž¬t«(´ªQž)¬t«¼´€¤­ÖI¡¨º¤­ž'¢´€Ã¦4¥¶ª4žOŸ¥¶ªI¬­´¨¡­¥¶ª4·›¾¶ž ´€ª4ž±4¡­¥¶ª4·¬?®4ž'¢¤­´€¡­¡­¿Q¤­´4ŸI±”¢¬´€¿”ž)¤¨X¬?¥§´€ª;Å ã4Å(×®4ž(¤­žU¡­±4¾¶¬c¥¶¡;¡?¿4¾¶¥°¢žOŸ"¦”ž)¬t«(ž)ž)ª¬?®4ž(¡­¬£¨º¬­ž)¡ƒ¤­ž)¿4¤?žµ ¡?ž)ªI¬­¥¶ª4·¬?®4ž´›¤­¥¶·€¥§ª_¨ºª”ŸL¬­®Qž‘ŸIžU¡­¬­¥¶ª”¨º¬­¥¶´›ª&´X²¼¬?®4ž ¤?ž)·›±4¾°¨º¤tµtžÂ4¿4¤­ž)¡?¡­¥¶´›ª&¿”¨X¬?®;Å Ï ²p¬­žU¤ ¬­®Qž ¡?¿”žO¢¥Ù¨º¾B¬­¤?žO¨X¬?Þ)ªI¬á´X²B¬­®4ž ¤­žU·›±4¾°¨X¤Áµ ž»¿Q¤­ž)¡?¡­¥¶´›ª¿”¨º¬­®:¥¶¡'¸bª4¥¶¡­®4žOŸ†Écª4´€¤­Ã}¨º¾¿4¤­´4¢ž)¡?¡­¥¶ª4·:¥¶¡ ¤?ž)¡­±QÞOŸÊ¥§ªÇ¬­®4ž'ŸIžU¡­¬­¥¶ª”¨º¬­¥¶´›ªL¡?¬£¨X¬?ž´X² ¬?®4ž'¢¾¶´›¡?¥§ªQ· «æ ¨º¤£¢XÅ à4´€¤;ž”¨ºÃ¿4¾¶ž›ÉU¬­®4ž(¤­ž)¡?±4¾¶¬¡­®4´X«ª¥¶ª˜à/¥¶·›±Q¤­ž!餭žU¿4¤­žµ ¡?ž)ªI¬­¡¬?®4ž¢¤?´›¡?¡­¿4¤?´»Ÿ»±”¢¬0´º² ¬?®4ž0¬t«(´ªQž)¬t«¼´€¤­ÖI¡(¡­®4´X«ª ¥¶ªLà;¥¶·€±4¤­žG4Å a a * à/¥¶·›±Q¤­žG4ò,çž)¬t«(´›¤?Ö>¡"Æt¾§¾¶±4¡?¬­¤¨X¬­¥¶ª4· Ô ¬?ž)¿4¡ìL¨ºª”ŸièÇ´º² ¬?®4žu(´›ÃÇ¿4¥¶¾§žµÔH ž)¿4¾°¨›¢žÏ1¾¶·›´›¤?¥¶¬­®4à Ætª¬?®4¥¶¡¼¡?¥¶Ã¿4¾¶ž(ž”¨XÿQ¾§ž€É›¬?®4ž0±4¿Q¿”ž)¤¾°¨ºª4·›±”¨º·›ž(´X²;¬?®4ž ´€¤­¥¶·›¥¶ª”¨º¾›ª4žU¬t«¼´€¤­Ö'¥¶ªà/¥¶·›±4¤?žDã1¥¶¡;¥°ŸIžUª>¬?¥°¢)¨X¾4¬?´0¬?®4ž¤­ž)·ºµ ±Q¾Ù¨º¤¼ž»¿Q¤­ž)¡?¡­¥¶´›ªs¬­®”¨º¬¥¶¡¢´€Ã¿4¥¶¾¶žOŸ¨XªbŸL¤­ž)¿Q¾Ù¨€¢žOŸÅ(Ætª ¬?®4žL¾¶¥§ªQ·›±4¥¶¡­¬?¥°¢¨X¿Q¿4¾¶¥Ù¢U¨X¬­¥¶´€ª4¡¿Q¤­ž)¡?ž)ªI¬­žOŸ ¥¶ª ¬­®Qž_ª4žÂ4¬ ¡?žO¢¬?¥§´€ª4¡Oɺ¬­®4ž(¬t«¼´¡?¥°ŸIž)¡´X²”¨¤?ž)·›±Q¾Ù¨º¤tµtžÂ4¿4¤­žU¡­¡­¥¶´€ª¿”¨º¬­® ¢´›ªI¬£¨º¥§ªLŸI¥5í žU¤­ž)ªI¬¡­¬?¤­¥¶ª4·€¡OÅc×®Qž±Q¿4¿”žU¤¡­¥°ŸIž'¢´›ªI¬¨X¥¶ª4¡ ÃÇ´›¤?¿4®4´›¾¶´€·›¥°¢)¨X¾X¥¶ªI²g´›¤­ÃL¨X¬?¥§´€ª;ˬ­®Qž¤­ž)·€±4¾°¨X¤tµtžÂ4¿4¤?ž)¡­¡?¥§´€ª ´€¿”žU¤£¨X¬?´›¤?¡‘¨º¿4¿”ž¨X¤'´€ª4¾¶Ài´€ª:¬­®4ž¾¶´X«(ž)¤'¡­¥°ŸIžL¨Xª”Ÿ:¨º¤­ž ªQ´›¬(¿4¤­ž)¡?ž)ªI¬¥§ªÇ¬­®4ž˜¸”ª”¨X¾¤?ž)¡­±Q¾§¬Å îL`Pa ïl8iâð¸²^e)hs´µd8phPqc פ¨›ŸI¥¶¬­¥¶´€ª”¨X¾Ç׫¼´ºµÁ↞ ½ ž)¾L¥¶Ã¿Q¾§žUÞ)ªI¬£¨º¬­¥¶´›ªQ¡Õ¨º¤­žÍ¨X¾Îµ ¤?žO¨›Ÿ»À ¢)¨º¿”¨X¦Q¾§ž:´º²ÊŸ»ž)¡£¢¤?¥¶¦4¥¶ª4·-¡?´›ÃǞi¾¶¥¶Ã¥¶¬­žŸ ¤?žOŸI±Iµ 
¿Q¾§¥°¢)¨º¬­¥¶´›ªL¨Xª”ŸÇ¥¶ªI¸4”¨º¬­¥¶´›ªL¨X¡¥¶ªL×$¨º·>¨º¾§´€·ãpÏ1ªI¬t«¼´€¤­¬­®/É åæ›æ€í4É©åã"éñåé€ì›é)Å ×®QžBô€¤­žB¢?®”¨X¾¶¾¶ž)ª4·€¥§ªQ·6¿4®Qžµ ªQ´›ÃžUª4´›ª ¥¶¡ ½ ¨X¤­¥°¨º¦4¾¶žµ¹¾¶ž)ªQ·›¬­® ¤?žOŸI±4¿4¾¶¥°¢)¨º¬­¥¶´›ª;Éá¨X¡ ²g´›±4ª”Ÿ"¥¶ªÌ_¨X¾°¨À'¨XªbŸ¬­®4ž¢¾¶´›¡­žU¾§À¤­žU¾Ù¨º¬­žOŸÆtªbŸI´›ª4žU¡­¥°¨Xª ¾°¨ºª4·›±”¨º·›ž›Å Ï0ªž”¨XÃÇ¿4¾¶ž´º² ½ ¨º¤­¥°¨X¦Q¾§žµ¹¾¶ž)ª4·€¬­®³²p±4¾¶¾Îµ¹¡?¬­ž)ä?žOŸI±Iµ ¿Q¾§¥°¢)¨º¬­¥¶´›ªÇ´4¢)¢±4¤?¡«¥¶¬?®}¬?®4žÌ_¨º¾°¨À¡­¬?ž)Ãò½¾¢°É4«®Q¥Ù¢?® ÃǞO¨XªQ¡ ó ¦”¨º·"F:´›¤ ó ¡?±4¥¶¬£¢)¨º¡­žFbˬ­®4¥¶¡²g´›¤­Ã ¥¶¡Ç¥§ªÓ²¹¨€¢¬ ªI±4æbž)¤Áµ¹ª4žU±4¬­¤¨X¾0¨XªbŸS¢U¨Xª¬­¤¨Xª4¡?¾Ù¨º¬­ži¨º¡¬­®4žL¿4¾¶±4¤¨X¾tÅ Æt¬?¡´ ½ ž)¤?¬0¿4¾¶±4¤¨X¾¥¶¡0¿Q®4´›ª4´€¾¶´›·›¥°¢)¨º¾¶¾§ÀŸ½3¾"¢ó½3¾"¢ ÉI²p´€¤­ÃžŸ ¦IÀ_¤?ž)¿bžO¨X¬?¥§ªQ·_¬­®4žÇ¡­¬?ž)ÃȬt«¥Ù¢ž¥§ª:¨Ç¤­´X«'ÅÇÏ0¾¶¬­®Q´›±4·€® ¬?®4¥¶¡:¿4¾¶±4¤£¨º¾¶¥§ï¨X¬­¥¶´€ª[¿4¤­´4¢ž)¡?¡Ã}¨À[¨X¿4¿bžO¨X¤ ¢´›ª”¢)¨º¬­žµ ªb¨X¬­¥ ½ ž›Éc¥¶¬ŸI´Iž)¡Lª4´€¬¥§ª ½ ´›¾ ½ žL¢´›ª”¢U¨X¬­žUª”¨X¬?¥§ªQ· ¨:¿4¤?žµ Ÿ»¥Ù¢¬£¨X¦Q¾§ž¿Q¾§±Q¤£¨X¾¶¥¶ï)¥¶ª4·_ÃÇ´›¤?¿4®4ž)ÃǞ›É¦4±4¬¤¨X¬?®4ž)¤¢´›¿IÀ€µ ¥¶ª4·L¬?®4ž¿4¤­ž¢žOŸI¥¶ª4·s¡­¬?ž)ÃiÉ/«®”¨X¬?ž ½ žU¤'¥¶¬'Ã}¨ÀL¦bž¨Xª”Ÿ ®Q´X«¼ž ½ ž)¤¾¶´›ª4·:¥¶¬ÃL¨Ài¦bž›Å8×®I±4¡¬?®4žL´ ½ ž)¤?¬¿4¾¶±4¤¨X¾ ´º²zŒ8?ô/"½ª õ8¥Bã ó ¿b´›¤­¬F>éUÉ¥§¬?¡­ž)¾Î²}¨ŸIž)¤?¥ ½ žOŸ ²p´›¤?ÃiÉ¥¶¡ ¿Q®4´›ª4´€¾¶´›·›¥°¢)¨º¾¶¾§ÀŒŽ?ô/½ª õ8¥Œ8ô "½ªõT"¥4Å º ¤­´4ŸI±b¢¬­¥ ½ ž:¤?žOŸI±4¿Q¾§¥°¢)¨º¬­¥¶´›ª[¢)¨ºª4ª4´›¬L¦bžÕŸ»ž)¡£¢¤?¥¶¦”žOŸ ¦IÀ¸”ªQ¥§¬?žµt¡­¬£¨º¬­ž'´€¤ž ½ žUªi¢´›ªI¬?žÂ4¬tµg²p¤­ž)ž˜²p´›¤?Ã}¨X¾¶¥¶¡­Ã¡ÅÆt¬ ¥¶¡«(ž)¾¶¾ÖIª4´X«ª¬­®”¨º¬¬­®4ž_¢´€¿>À¾°¨XªQ·›±”¨º·›ž›É÷öøBøúù÷ø û â^üIÉ4«®4ž)¤?žž¨›¢?®_«(´›¤£Ÿ&¢´›ªI¬£¨º¥¶ª4¡0¬t«(´¢´›¿Q¥§žU¡0´X²c¬?®4ž ¡¨XÃǞc¡­¬?¤­¥¶ª4·”É€¥¶¡c¨'¢´›ªI¬?žÂ4¬tµt¡­ž)ª4¡?¥¶¬­¥ ½ ž¾°¨XªQ·›±”¨º·›ž›Åð´X«(µ ž ½ ž)¤ɔ¥Î²c¬­®Qž ó ¦”¨º¡­žFǾ°¨Xª4·€±”¨X·€ž˜â³¥¶¡¸”ª4¥¶¬?ž›Éb«¼ž¢U¨Xª_´º² ¢´›±4¤?¡­ž:¢´›ª4¡?¬­¤?±”¢¬:¨s¸”ª4¥¶¬?žµ¹¡?¬£¨º¬­žsª4ž)¬t«(´›¤­Ö ¬­®”¨º¬}žUªIµ ¢´»Ÿ»ž)¡‘â ¨Xª”Ÿs¬?®4ž¤?žOŸI±4¿4¾¶¥°¢)¨º¬­¥¶´›ª4¡"´X²¨X¾¶¾c¬?®4ž¡?¬­¤?¥§ªQ·›¡ ‹8„ §Ã"“nÄ Æ † “ É § {"}~€n7{Š Ä €n… “Ä ‡€n…Ç “Tà „‰Æ «âå|ý † “ É §7þ « ® «æ ‹8„ §Ã"“nÄ Æ ’ „ ÄГ †k€Ðÿ “  {}~€nÐ{nŠ Ä €n… 
“nÄ ‡€n…Ç “Tà „‰Æ «âå|ý ’ „ ÄГ †k€Ðÿ “  þ « ® «æ à/¥¶·›±4¤?ž'ç4ò׫(´ º ¨X¬?®4¡¥¶ªL¬­®4žÆtª4¥¶¬­¥°¨º¾;Ì_¨X¾°¨À× ¤¨Xª4¡ŸI±”¢žU¤1ž¸bª4žOŸ ½ ¥Ù¨&u(´›ªb¢)¨X¬?ž)ª”¨X¬?¥¶´›ª ‹8„ §Ã"“nÄ Æ † “ É § {"}~€nÐ{nŠ Ä €n… “nÄ ‡€n…Ç “Tà „‰Æ † “ É § † “ É § ‹8„ §Ã"“nÄ Æ ’ „ ÄГ †k€Ðÿ “  {}~"€nÐ{nŠ Ä €n… “nÄ ‡€n…Ç “Tà „‰Æ ’ „ ÄГ †k€Ðÿ “  ’ „ Äk“ † €Åÿ “  à/¥§·€±4¤­ž'æQò×®4ž'Ì_¨X¾°¨Àw8xy&ϲp¬?ž)¤0¬?®4žÏ0¿4¿4¾¶¥°¢)¨º¬­¥¶´›ªÇ´X²Êu(´›ÃÇ¿4¥¶¾§žµÔH ž)¿4¾°¨›¢ž(¬­´¬?®4ž↴X«¼ž)¤Áµ Ô ¥ÙŸ»žâ;¨ºª4·›±b¨X·›ž ¥¶ª‘âƒÅ›:ž(«¥¶¾¶¾4¡­®4´X«¨¡­¥¶ÃÇ¿4¾¶ž¼¨Xª”ŸžU¾§žU·>¨XªI¬c«¨À'¬?´ŸI´ ¬­®Q¥§¡±Q¡­¥¶ª4·¡­¬?¤­¥°¢¬?¾§Àʸ”ª4¥¶¬­žµ¹¡?¬£¨X¬?ž'´›¿bž)¤¨X¬­¥¶´€ª4¡OÅ ×´i±4ªbŸIž)¤­¡?¬£¨ºª”Ÿ¬­®Qž}¡?´›¾¶±4¬?¥§´€ª ¬?´s²p±4¾¶¾Îµ¹¡?¬­ž)ä?žOŸI±Iµ ¿4¾¶¥°¢)¨X¬?¥¶´›ª_±4¡?¥¶ª4·_¬­®Qž¢´›ÃÇ¿4¥¶¾§žµ¹¤?ž)¿4¾°¨›¢ž¨X¾¶·€´›¤­¥¶¬?®4ÃE¤­žµ ¯€±4¥¶¤­ž)¡¨'¦Q¥§¬´º²c¦”¨›¢?ÖI·›¤?´›±4ª”Ÿ†ÅÆtª_¬­®4žê1ž)¤­´Â_¤­žU·›±4¾°¨X¤Áµ žÂ4¿4¤?ž)¡­¡?¥§´€ª ¢U¨X¾°¢±4¾¶±4¡¬­®Qž)¤­ž[¨X¤?ž-¡?ž ½ ž)¤¨X¾_´€¿”žU¤£¨X¬?´›¤?¡ ¬­®b¨X¬(¥§ª ½ ´›¾ ½ ž¢´›ª”¢U¨X¬­žUª”¨X¬?¥§´€ª;ÅcàQ´›¤(ž”¨XÿQ¾§ž€É›¥Î²ÊÑÇ¥§¡¨ ¤­žU·›±4¾°¨X¤1žÂ4¿4¤­žU¡­¡­¥¶´€ª ŸIž)ª4´€¬­¥¶ª4·}¨Ê¾°¨Xª4·€±”¨X·€ž´€¤'¨¤­žU¾Ù¨jµ ¬­¥¶´€ª;ÉnÑ ä ŸIž)ª4´€¬­ž)¡1ï)ž)¤­´´€¤Ã´€¤­ž¨ºª”ŸÑn{ŸIžUª4´›¬?ž)¡0´€ª4ž ´›¤ƒÃ´›¤?ž ¢´›ª”¢)¨º¬­ž)ªb¨X¬­¥¶´€ª4¡¼´º²BÑ«¥¶¬­®¥¶¬?¡­ž)¾Î²£Å/×®4ž)¤?ž¨X¤?ž ¨X¾¶¡?´´€¿”žU¤£¨X¬?´›¤?¡¼¬?®”¨X¬cž»¿Q¤­ž)¡?¡0¨1¸4Â4žOŸªI±4æbž)¤´X²;¢´›ªIµ ¢)¨º¬­ž)ª”¨º¬­¥¶´›ªQ¡OÅcÜ/Â4¿4¤?ž)¡­¡?¥§´€ª4¡c´º² ¬?®4ž(²p´›¤?Ã Ñ « ÉO«®Qž)¤­ž@ ¥¶¡¼¨Xª¥¶ªI¬­žU·›ž)¤É4ŸIž)ª4´€¬­žz¥Ç¢´›ª”¢U¨X¬­žUª”¨X¬?¥§´€ª4¡(´X²ÊÑÅTö “ † à ü ŸIž)ªQ´›¬­žU¡¬­®4ž'¢´€ª”¢)¨X¬?ž)ª”¨º¬­¥¶´›ªL´X²;¡?ÀIæ”´€¾§¡ “ Én†ÉI¨Xª”Ÿ à Š:žcž)ÃÇ¿4¾¶´XÀ «âå ¨Xª”Ÿ «æ ¨X¡ Ÿ»ž)¾¶¥§ÃÇ¥¶¬­ž)¤¡?À>æb´›¾¶¡¨X¤­´€±4ª”Ÿ ¤­žU·›±4¾°¨X¤Áµ¹ž»¿Q¤­ž)¡?¡­¥¶´›ª_¡?±4¦4¡?¬­¤­¥¶ª4·€¡OÅ ×®4ž¤­žOŸI±Q¿4¾¶¥Ù¢U¨X¬­¥¶´€ª_´X² ¨ºªIÀ}¡?¬­¤?¥§ªQ· øÚ¢)¨Xªs¬­®4žUªi¦bž ª4´€¬£¨X¬?žOŸL¨X¡ ý Í þ « ®bÉ>¨ºª”ŸÇ«¼ž1¡­¬¨X¤­¬¦IÀLŸIž¸”ª4¥¶ª4·¨"ª4ž)¬tµ «(´›¤­Ö«®4žU¤­ž¬­®Qž¼¾¶´X«(ž)¤tµt¡­¥°ŸIž(¡­¬?¤­¥¶ª4·›¡c¨º¤­ž(¦4±4¥¶¾¶¬;¦IÀ¡­¥¶Ãµ ¿4¾¶ž¢´›ª”¢U¨X¬­žUª”¨X¬?¥§´€ª´º²†¨ ¿4¤­ž¸4 «âå ÉX¨1¤­´I´›¬ž)ª”¢¾§´€¡­žOŸ¥¶ª 
¦4¤¨›¢ž)¡ɼ¨ºª”Ÿi¨ºª:´ ½ ž)¤?¬tµt¿4¾¶±4¤£¨º¾c¡­±»î « ®Ç²p´€¾¶¾§´X«(žOŸ&¦IÀ ¬­®Qž‘¢¾¶´€¡­¥¶ª4· «æ Åà;¥¶·€±4¤­žçÊ¡­®4´X«¡¬?®4ž¿b¨X¬­®Q¡0²g´›¤0¬t«(´ Ì_¨º¾Ù¨À¿Q¾§±Q¤£¨X¾¶¡(¥¶ª}¬?®4ž¥¶ª4¥¶¬­¥°¨X¾bª4ž)¬t«(´›¤?Ö Å ×®4ž}¢´›Ã¿Q¥§¾¶žµt¤­žU¿4¾°¨›¢ž¨º¾§·€´›¤?¥§¬?®4ÃiÉ/¨X¿4¿4¾¶¥¶žOŸs¬?´i¬?®4ž ¾¶´X«¼žU¤¡?¥ÙŸ»ž}´º²¬?®4¥¶¡ª4žU¬t«¼´€¤­ÖÉ(¤­žO¢´€·›ª4¥¶ï)žU¡žO¨€¢­®¥¶ª”ŸI¥Îµ ½ ¥°ŸI±”¨X¾ŸIž)¾¶¥¶Ã¥¶¬­žŸ¤­žU·›±4¾°¨X¤Áµ¹ž»¿Q¤­ž)¡?¡­¥¶´›ªs¡­±4¦Q¡­¬­¤?¥¶ª4·¾¶¥¶Ö›ž «âåÓý † “ É § þ « ® «æ É;¢´›ÃÇ¿4¥¶¾¶ž)¡'¥¶¬OÉc¨ºª”Ÿ:¤?ž)¿4¾°¨›¢žU¡¥§¬«¥¶¬­® ¬­®Qž¤­ž)¡?±4¾¶¬'´X²¼¬?®4ž¢´€Ã¿4¥¶¾°¨X¬?¥§´€ª;ÉI®4ž)¤?žž½¾¢P½¾¢°Å×®4ž ¡£¨ºÃž¿4¤?´»¢ž)¡­¡¨º¿4¿4¾¶¥¶ž)¡0¬?´¬?®4žžUª>¬?¥¶¤­ž'¾¶´X«(ž)¤tµt¡­¥°ŸIž'¾°¨ºªIµ ·›±b¨X·›ž€ÉI¤­ž)¡?±4¾¶¬­¥¶ª4·¥¶ª}¨"ª4ž)¬t«(´›¤?Ö¬­®b¨X¬(¤­ž)¾°¨X¬?ž)¡0¿b¨X¥¶¤­¡(´X² ¡­¬?¤­¥¶ª4·€¡c¡­±”¢?®¨º¡ ¬?®4ž´›ªQž)¡¥§ªà/¥¶·›±4¤?ž æQÅ;×®4¥¶¡¿4¤?´ ½ ¥°ŸIžU¡ ¬­®Qž ŸIžU¡­¥¶¤­žOŸ"¡­´€¾¶±4¬­¥¶´›ª/ÉO¡?¬­¥¶¾¶¾X¸”ªQ¥§¬?žµt¡­¬£¨º¬­ž›Éj²p´›¤¨Xª”¨º¾§ÀIï)¥¶ª4· ¨XªbŸ·€ž)ª4ž)¤¨X¬?¥§ªQ·²p±4¾¶¾Îµ¹¡?¬­ž)ÃB¤?žOŸI±4¿Q¾§¥°¢)¨º¬­¥¶´›ªÇ¥¶ª Ì&¨X¾°¨À>Å   %'·|>^ÁÁÞ'( >B&Ú ƒù)úX÷©ù?"< C†ú)"Ú    >ù?©ú<" Ø ÁÚ)ù?)/ Ú)ù? ÙnÍùÍø,úÝÚOø,Ø  Ø"Þ "ø,">ù?&B^ù;§ù÷E^%'ø,‚§ùA IùA!µß÷ ù,ØIù)ú!¸Èø ú3< "صÁÚù?< ù?"<Ÿ"úø'ù÷zÙn t÷ ø'ù)ÚB<µ8£úÚ3âÙ)>^3@";ú< "Ø ÁÚ)ù? ù?"<@" Ù>ù?POútø&E=ܟÈÙnÁ9&>ù?! ›ú0ù?Ø"؛úXù?ÚÓt'ú<" Ø ÁÁÞ Ú)ù)¡Úù?_ùÚÚ " Pú="Úø,Ø žØ""øÈIù}ù? 
A Morphologically Sensitive Clustering Algorithm for Identifying Arabic Roots

Anne N. DE ROECK
Department of Computer Science, University of Essex, Colchester, CO4 3SQ, U.K.
[email protected]

Waleed AL-FARES
Computer Science Department, College of Business Studies, Hawaly, Kuwait
[email protected]

Abstract

We present a clustering algorithm for Arabic words sharing the same root. Root-based clusters can substitute for dictionaries in indexing for IR. Modifying Adamson and Boreham (1974), our Two-stage algorithm applies light stemming before calculating word pair similarity coefficients, using techniques sensitive to Arabic morphology. Tests show a successful treatment of infixes and accurate clustering of up to 94.06% for unedited Arabic text samples, without the use of dictionaries.

Introduction

Canonisation of words for indexing is an important and difficult problem for Arabic IR. Arabic is a highly inflectional language, with 85% of words derived from tri-lateral roots (Al-Fedaghi and Al-Anzi 1989). Stems are derived from roots through the application of a set of fixed patterns. Addition of affixes to stems yields words. Words sharing a root are semantically related, and root indexing is reported to outperform stem and word indexing on both recall and precision (Hmeidi et al. 1997). However, Arabic morphology is excruciatingly complex (the Appendix attempts a brief introduction), and root identification on a scale useful for IR remains problematic.

Research on Arabic IR tends to treat automatic indexing and stemming separately. Al-Shalabi and Evans (1998) and El-Sadany and Hashish (1989) developed stemming algorithms. Hmeidi et al. (1997) developed an information retrieval system with an index, but do not explain the underlying stemming algorithm. In Al-Kharashi and Evans (1994), stemming is done manually and the IR index is built by manual insertion of roots, stems and words. Typically, Arabic stemming algorithms operate by "trial and error".
Affixes are stripped away, and stems "undone", according to patterns and rules, and with reference to dictionaries. Root candidates are checked against a root lexicon. If no match is found, affixes and patterns are readjusted and the new candidate is checked. The process is repeated until a root is found.

Morpho-syntactic parsers offer a possible alternative to stemming algorithms. Al-Shalabi and Evans (1994), and Ubu-Salem et al. (1999) develop independent analysers. Some work builds on established formalisms such as DATR (Al-Najem 1998) or KIMMO. This latter strand produced extensive deep analyses. Kiraz (1994) extended the architecture with multi-level tape, to deal with the typical interruption of root letter sequences caused by broken plurals and weak root letter change. Beesley (1996) describes the re-implementation of earlier work as a single finite state transducer between surface and lexical (root and tag) strings. This was refined (Beesley 1998) into the current on-line system capable of analysing over 70 million words.

So far, these approaches have limited scope for deployment in IR. Even if substantial, their morpho-syntactic coverage remains limited, and processing efficiency implications are often unclear. In addition, modern written Arabic presents a unique range of orthographic problems. Short vowels are not normally written (but may be). Different regional spelling conventions may appear together in a single text and show interference with spelling errors. These systems, however, assume text to be in perfect (some even vowelised) form, forcing the need for editing prior to processing. Finally, the success of these algorithms depends critically on root, stem, pattern or affix dictionary quality, and no sizeable and reliable electronic dictionaries exist. Beesley (1998) is the exception, with a reported 4930 roots encoded with associated patterns, and an additional affix and non-root stem lexicon1.
1 Al-Fedaghi and Al-Anzi (1989) estimate there are around 10,000 independent roots.

Absence of large and reliable electronic lexical resources means dictionaries would have to be updated as new words appear in the text, creating a maintenance overhead. Overall, it remains uncertain whether these approaches can be deployed and scaled up cost-effectively to provide the coverage required for full scale IR on unsanitised text.

Our objective is to circumvent morpho-syntactic analysis of Arabic words by using clustering as a technique for grouping words sharing a root. In practice, since Arabic words derived from the same root are semantically related, root-based clusters can substitute for root dictionaries in indexing for IR and furnish alternative search terms. Clustering works without dictionaries, and the approach removes dictionary overheads completely. Clusters can be implemented as a dimension of the index, growing dynamically with text, and without specific maintenance. They will accommodate effortlessly a mixture of regional spelling conventions and even some spelling errors.

1 Clustering and Arabic

To our knowledge, there is no application of automatic root-based clustering to Arabic using morphological similarity without a dictionary. Clustering and stemming algorithms have mainly been developed for Western European languages, and typically rely on simple heuristic rules to strip affixes and conflate strings. For instance, Porter (1980) and Lovins (1968) confine stemming to suffix removal, yet yield acceptable results for English, where roots are relatively inert. Such approaches exploit the morphological frugality of some languages, but do not transfer to heavily inflected languages such as Arabic. In contrast, Adamson and Boreham (1974) developed a technique to calculate a similarity co-efficient between words as a factor of the number of shared sub-strings. The approach (which we will call Adamson's algorithm for short) is a promising starting point for Arabic
clustering because affix removal is not critical to gauging morphological relatedness. In this paper, we explain the algorithm, apply it to raw modern Arabic text and evaluate the result. We explain our Two-stage algorithm, which extends the technique by (a) light stemming and (b) refinements sensitive to Arabic morphology. We show how the adaptation increased successful clustering of both the original and new evaluation data.

2 Data Description

We focus on IR, so experiments use modern, unedited Arabic text, with unmarked short vowels (Stalls and Knight 1998). In all we constructed five data sets. The first set is controlled, and was designed for testing on a broad spectrum of morphological variation. It contains selected roots with derived words chosen for their problematic structure, featuring infixes, root consonant changes and weak letters. It also includes superficially similar words belonging to different roots, and examples of hamza as a root consonant, an affix and a silent sign. Table 1 gives details.

Table 1: Cluster size for 1st data set

  root                  size   root                  size
  ktb (wrote)             49   HSL (obtained)           7
  qwm (straightened)      38   s'aL (asked)             6
  mr (passed)             26   HSd (cultivated)         5
  wSL (linked)            11   shm (shared)             4
  r'as (headed)           10

Data sets two to four contain articles extracted from Al-Raya (1997), and the fifth from Al-Watan (2000), both newspapers from Qatar. Following Adamson, function words have been removed. The sets have domain bias, with the second (575 words) and the fourth (232 words) drawn randomly from the economics section and the third (750 words) from the sports section. The fifth (314 words) is a commentary on political history. Sets one to three were used to varying extents in refining our Two-stage algorithm. Sets four and five were used for evaluation only. Electronically readable Arabic text has only recently become available on a useful scale, hence our experiments were run on short texts.
On the other hand, the coverage of the data sets allows us to verify our experiments on demanding samples, and their size lets us verify correct clustering manually.

3 Testing Adamson's Algorithm

3.1 The Algorithm

Adamson and Boreham (1974) developed a technique expressing relatedness of strings as a factor of shared sub-strings. The algorithm drags an n-sized window across two strings, with a 1 character overlap, and removes duplicates. The strings' similarity co-efficient (SC) is calculated by Dice's equation:

  SC (Dice) = 2 * (number of shared unique n-grams) / (sum of unique n-grams in each string)

Table 2: Adamson's Algorithm Illustrated

  String       2-grams                      Unique 2-grams
  phosphorus   ph ho os sp ph ho or ru us   ph ho os sp or ru us (7)
  phosphate    ph ho os sp ph ha at te      ph ho os sp ha at te (7)

  Shared unique 2-grams: ph ho os sp (4)
  SC (Dice) = 2(4)/(7+7) = 0.57

After the SC for all word pairs is known, the single link clustering algorithm is applied. A similarity (or dissimilarity) threshold is set. The SC of pairs is collected in a matrix. The threshold is applied to each pair's SC to yield clusters. A cluster absorbs a word as long as its SC to another cluster item exceeds the threshold (van Rijsbergen 1979). Similarity to a single item is sufficient. Cluster size is not pre-set.

3.2 Background Assumptions

This experiment tests Adamson's algorithm on Arabic data to assess its ability to cluster words sharing a root. Each of the data sets was clustered manually to provide an ideal benchmark. This task was executed by a native Arabic speaker with reference to dictionaries. Since we are working with very small texts, we sought to remove the effects of sampling in the tests. To assess Adamson's algorithm's potential for clustering Arabic words, we preferred to compare instances of optimal performance. We varied the SC to yield, for each data set, the highest number of correct multi-word clusters.
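As a minimal sketch (not the authors' implementation), the 2-gram variant of the similarity co-efficient and the single-link clustering step described above can be written as:

```python
def unique_bigrams(word):
    # drag a 2-character window across the string with a 1-character
    # overlap; building a set removes duplicate 2-grams
    return {word[i:i + 2] for i in range(len(word) - 1)}

def sc_dice(w1, w2):
    # Dice: 2 * (shared unique n-grams) / (sum of unique n-grams in each string)
    a, b = unique_bigrams(w1), unique_bigrams(w2)
    return 2 * len(a & b) / (len(a) + len(b))

def single_link(words, threshold):
    # single-link clustering: a cluster absorbs a word as long as its SC
    # to at least one cluster member exceeds the threshold
    clusters = []
    for w in words:
        linked = [c for c in clusters if any(sc_dice(w, m) > threshold for m in c)]
        for c in linked:
            clusters.remove(c)
        clusters.append([w] + [m for c in linked for m in c])
    return clusters

print(round(sc_dice("phosphorus", "phosphate"), 2))  # 0.57, as in Table 2
```

With a threshold of 0.5, "phosphorus" and "phosphate" fall into one cluster while an unrelated word stays in a singleton cluster.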
Note that the higher the SC cut-off, the less likely it is that words will cluster together, and the more single word clusters will appear. This has the effect of growing the number of correct clusters, because the proportion of correct single word clusters will increase. As a consequence, for our purposes, the number of correct multi-word clusters (and not just correct clusters) is an important indicator of success. A correct multi-word cluster covers at least two words and is found in the manual benchmark. It contains all and only those words in the data set which share a root.

Comparison with a manual benchmark inevitably introduces a subjective element. Also, our evaluation measure is the percentage of correct benchmark clusters retrieved. This is a "recall" type indicator. Together with the strict definition of correct cluster, it cannot measure cluster quality. Finer grained evaluation of cluster quality would be needed in an IR context. However, our main concern is comparing algorithms. The current metrics aim for a conservative gauge of how Adamson's algorithm can yield more exact clusters from a full range of problematic data.

Table 3: Adamson's Algorithm Test Results

                                 Set 1    Set 2    Set 3    Set 4       Set 5
  Benchmark:
  Total Manual Clusters (A)          9      267      337      151         190
  Multi-word (B)                     9      130      164       50          63
  Single word (C)                    0      137      173      101         127
  SC cut-off²                     0.50     0.54     0.75     0.58-0.60   0.61-0.66
  Test (% of Benchmark):
  Correct Clusters (% of A)     11.11%   56.55%   60.83%   70.86%      74.21%
  Multi-word (% of B)           11.11%   38.46%   21.95%   40%         34.92%
  Single word (% of C)           0.0%    73.72%   97.69%   86.14%      93.70%

2 Ranges rather than specific values are given where cut-offs between the lower and higher value do not alter cluster distribution.

Our interpretation of correct clustering is stringent and therefore conservative, adding to the significance of our results. Cluster quality will be reviewed informally.

3.3 Adamson's Arabic Test Results

Table 3 shows results for Adamson's algorithm.
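The evaluation measure described above (the percentage of correct benchmark clusters retrieved, under an exact all-and-only match) amounts to exact set matching; a minimal sketch, with invented example clusters rather than the paper's data:

```python
def benchmark_recall(predicted, benchmark):
    # a predicted cluster counts as correct only if it exactly matches a
    # manual benchmark cluster: all and only the words sharing a root
    pred = {frozenset(c) for c in predicted}
    bench = {frozenset(c) for c in benchmark}
    return len(pred & bench) / len(bench)
```

For instance, if the benchmark holds three clusters and the algorithm reproduces exactly one of them, the score is 1/3.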
The figures for the first data set have to be suitably interpreted. The set deliberately did not include single word clusters. The results suggest that the algorithm is very successful at identifying single word clusters but performs poorly on multi-word clusters. The high success rate for single word clusters is partly due to the high SC cut-off, set to yield as many correct multi-word clusters as possible. In terms of quality, however, only a small proportion of multi-word clusters were found to contain infix derivations (11.11%, 4.76%, 0.0%, 4.35% and 9.09% for each data set respectively), as opposed to other variations. In other words, strings sharing character sequences in middle position cluster together more successfully. Infix recognition is a weak point in this approach.

Whereas the algorithm is successful for English, it is no surprise that it should not perform equally well on Arabic. Arabic words tend to be short, and the chance of words derived from different roots sharing a significant proportion of characters is high (e.g. Khbr (news) vs. Khbz (bread)). Dice's equation assumes the ability to identify an uninterrupted sequence of root consonants. The heavy use of infixes runs against this. Similarly, affixes cause interference (see 4.1.1).

4 The Two-Stage Algorithm

The challenge of root-based clustering for Arabic lies in designing an algorithm which will give relevance to root consonants only. Using Adamson's algorithm as a starting point, we devised a solution by introducing and testing a number of successive refinements based on morphological knowledge and the first three data sets. The rationale motivating these refinements is given below.

4.1 Refinements

4.1.1 Affixes and light stemming:

The high incidence of affixes keeps accurate cluster formation low, because it increases the SC among words derived from different roots, and lowers the SC between derivations of the same root using different affixes, as illustrated in Tables 4 and 5.
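The Table 5 figures can be reproduced numerically with a hypothetical light stemmer; the affix lists below are illustrative assumptions for this sketch, not the paper's actual inventory:

```python
# Illustrative 'obvious affix' lists -- assumptions for this sketch,
# NOT the paper's actual light-stemming inventory.
PREFIXES = ["mst", "aL"]
SUFFIXES = ["yh", "at"]

def light_stem(word):
    # strip at most one obvious prefix and one obvious suffix,
    # never reducing the word below two letters
    for p in PREFIXES:
        if word.startswith(p) and len(word) - len(p) >= 2:
            word = word[len(p):]
            break
    for s in SUFFIXES:
        if word.endswith(s) and len(word) - len(s) >= 2:
            word = word[:-len(s)]
            break
    return word

def sc_dice(w1, w2):
    # 2-gram Dice co-efficient as in section 3.1
    bigrams = lambda w: {w[i:i + 2] for i in range(len(w) - 1)}
    a, b = bigrams(w1), bigrams(w2)
    return 2 * len(a & b) / (len(a) + len(b))
```

Before stemming, mstmr and mr score 0.40; after stemming both reduce to mr and score 1.0, matching Table 5.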
Following Popovic and Willett (1992), we introduced stemming to minimise the effect of affixes. We found empirically that light stemming, removing a small number of obvious affixes, gave better results than heavy stemming aimed at full affix stripping. Heavy stemming brings the risk of root consonant loss (e.g. t'amyn (insurance) from root amn (sheltered): heavy stemming yields t'am, light stemming t'amn). Light stemming, on the other hand, does little more than reduce word size to 3 or 4 characters.

4.1.2 Weak letters, infixes and "cross"

Weak letters (alif, waw, ya) occur freely as root consonants as well as affixes. Under derivation, their form and location may change, or they may disappear. As infixes, they interfere with the SC, causing failure to cluster (Table 6). We reduced their effects by a method we refer to as "cross": it adds a bigram combining the letters occurring immediately before and after the weak letter.

Table 4: Inflected words from different roots: ?Lm (learned) and arb (arabised)

  String                     Unique 2-grams with affixes   Unique 2-grams without affixes
  aL?aLmyh (the universal)   aL L? ?a Lm my yh (6)         ?a Lm (2)
  aL?rbyh (the Arabic)       aL L? ?r rb by yh (6)         ?r rb (2)
  SC (Dice)                  2(3)/(6+6) = 0.50             2(0)/(2+2) = 0

Table 5: Inflected words from the same root: mrr (passed)

  String               Unique 2-grams with affixes   Unique 2-grams without affixes
  mstmr (continuous)   ms st tm mr (4)               mr (1)
  mr (passed)          mr (1)                        mr (1)
  SC (Dice)            2(1)/(4+1) = 0.40             2(1)/(1+1) = 1.0

Table 6: Infix derivation from root wqf (stopped) - post light stemming

  String      Unique 2-grams without cross   Unique 2-grams with cross
  qaf         qa af (2)                      qa af qf (3)
  wqf         wq qf (2)                      wq qf (2)
  SC (Dice)   2(0)/(2+2) = 0                 2(1)/(2+3) = 0.4

4.1.3 Suspected affixes and differential weighting

Our objective is to define an algorithm which gives suitable precedence to root consonants. Light stemming, however, does not remove all affixes.
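The "cross" device of Section 4.1.2 can be sketched as follows (an illustrative reimplementation, not the authors' code; the weak letters are written a, w, y as in the paper's transliterated examples):

```python
WEAK = {"a", "w", "y"}  # alif, waw, ya in the paper's transliteration

def bigrams_with_cross(word):
    """Unique bigrams, plus for each word-internal weak letter a 'cross'
    bigram joining the characters immediately before and after it."""
    grams = {word[i:i + 2] for i in range(len(word) - 1)}
    for i, ch in enumerate(word):
        if ch in WEAK and 0 < i < len(word) - 1:
            grams.add(word[i - 1] + word[i + 1])
    return grams
```

This reproduces Table 6: qaf gains the cross bigram qf, which it shares with wqf, raising their Dice SC from 0 to 2(1)/(2+3) = 0.4.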
Whereas foolproof affix detection is problematic due to the overlap between affix and root consonants, affixes belong to a closed class, and it is possible to identify "suspect" letters which might be part of an affix. Following Harman (1991), we explored the idea of assigning differential weights to substrings. Giving an equal weight of 1 to all substrings equates the evidence contributed by all letters, whether they are root consonants or not. Suspected affixes, however, should not be allowed to affect the SC between words on a par with characters contributing stronger evidence. We conducted a series of experiments with differential weightings, and determined empirically that a weight of 0.25 for strings containing weak letters, and 0.50 for strings containing suspected non-weak-letter affixes, gave the best SC for the first three data sets.

4.1.4 Substring boundaries

N-gram size can curtail the significance of word-boundary letters (Robertson and Willett 1992). To give them the opportunity to contribute fully to the SC, we introduced word-boundary blanks (Harman 1991). Also, the larger the n-gram, the greater its capacity to mask shorter substrings which can contain important evidence of similarity between word pairs (Adamson and Boreham 1974). Of equal importance is the size of the sliding overlap between successive n-grams (Adams 1991). The problem is to find the best settings for n-gram and overlap size to suit the language, and we sought to determine these experimentally. Bigrams with a single-character overlap and blank insertion (* in the examples) at word boundaries raised the SC for words sharing a root in our three data sets, and lowered the SC for words belonging to different roots, as Table 7 illustrates.

Table 7: Blank insertion with "cross"

  String      Unique 2-grams     (no.)
  qaf         *q qa af qf f*     (5)
  wqf         *w wq *q qf f*     (5)
  SC (Dice)   2(3)/(5+5) = 0.60

4.1.5 SC formula

Dice's equation boosts the importance of unique shared substrings between word pairs by doubling their evidence.
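Boundary blanks, cross and the differential weighting can be sketched together (illustrative only; the paper does not enumerate the "suspect" affix letters, so the SUSPECT set below is a placeholder assumption):

```python
WEAK = {"a", "w", "y"}            # alif, waw, ya as transliterated in the paper
SUSPECT = {"t", "m", "n", "s", "h"}  # ASSUMPTION: plausible non-weak affix letters

def padded_bigrams(word):
    """Unique bigrams over '*word*', plus cross bigrams over internal weak
    letters (as in Table 7; a boundary '*' may take part in a cross)."""
    w = "*" + word + "*"
    grams = {w[i:i + 2] for i in range(len(w) - 1)}
    for i, ch in enumerate(w):
        if ch in WEAK and 0 < i < len(w) - 1:
            grams.add(w[i - 1] + w[i + 1])
    return grams

def weight(gram):
    """Differential weights: 0.25 if the bigram contains a weak letter,
    0.5 if it contains a suspected (non-weak) affix letter, else 1."""
    if any(c in WEAK for c in gram):
        return 0.25
    if any(c in SUSPECT for c in gram):
        return 0.5
    return 1.0
```

With blanks and cross, qaf and wqf share the bigrams *q, qf and f*, giving the unweighted Dice SC of 0.60 shown in Table 7.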
As we argued earlier, since Arabic words tend to be short, the relative impact of shared substrings is already dramatic. To reduce this effect, we replaced the Dice metric with the Jaccard formula below (see van Rijsbergen 1979):

  SC (Jaccard) = shared unique n-grams / (sum of unique n-grams in each string - shared unique n-grams)

4.2 The Two-stage Algorithm

The Two-stage algorithm is fully implemented. Words are first submitted to light stemming to remove obvious affixes. The second stage is based on Adamson's algorithm, modified as described above. From the original, we retained bigrams with a one-character overlap, but inserted word-boundary blanks. Unique bigrams are isolated and "cross" is applied. Each bigram is assigned a weight (0.25 for bigrams containing weak letters; 0.5 for bigrams containing potential non-weak-letter affixes; 1 for all other bigrams). Jaccard's equation computes an SC for each pair of words. We retained the single-link clustering algorithm to ensure comparability.

4.3 Testing the Two-stage Algorithm

Table 8 shows the results of the Two-stage algorithm for our data sets. The maximally effective cut-off points for the different sets lie closer together. The figures for the first set have to be treated with caution: the perfect clustering is explained by the text's perfect spelling and by the sample containing exactly those problematic phenomena on which we wanted to concentrate.
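The final clustering step can be sketched as follows (a minimal unweighted illustration: jaccard() here omits the differential weights, and single_link() is a naive quadratic implementation rather than the authors' code):

```python
def jaccard(a, b):
    """Jaccard SC over bigram sets: shared / (sum of set sizes - shared)."""
    return len(a & b) / len(a | b)

def single_link(words, grams, threshold):
    """Single-link clustering: merge two clusters whenever any word pair
    across them has SC >= threshold, repeating until no merge applies."""
    clusters = [{w} for w in words]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if any(jaccard(grams[x], grams[y]) >= threshold
                       for x in clusters[i] for y in clusters[j]):
                    clusters[i] |= clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters
```

Single-link merging is what lets one superficially similar word pair chain two otherwise distinct root groups together, which explains the "one stray variation" errors discussed below.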
Table 8: Two-stage Algorithm Test Results

  Data set                       Set 1      Set 2    Set 3    Set 4      Set 5
  Benchmark:
    Total manual clusters (A)    9          267      337      151        190
    Multi-word (B)               9          130      164      50         63
    Single word (C)              0          137      173      101        127
    SC cut-off                   0.42-0.66  0.54     0.54     0.53-0.54  0.62-0.66
  Test (% of benchmark):
    Correct clusters (% of A)    100%       88.05%   86.94%   94.04%     86.84%
    Multi-word (% of B)          100%       85.39%   82.93%   94%        74.60%
    Single word (% of C)         -          90.51%   90.75%   94.06%     92.91%

The algorithm deals with weak-letter mutation, and with infix appearance and disappearance in words sharing a root (e.g. the root qwm and its derived words, especially the role of hamza as an infix in one of its variations). Even though the second and third data sets informed the modifications to a limited extent, their results show that the improvements stood up to free text. For the second data set, the Two-stage algorithm showed a 31.5% improvement over Adamson's algorithm. Importantly, it discovered 84.13% of the multi-word clusters containing words with infixes, an improvement of 79.37%. The values for single-word clustering are close to Adamson's: the modifications preserved the strength of Adamson's algorithm in keeping single-word clusters from mixing, because we were able to maintain a high SC threshold.

On the third data set, the Two-stage algorithm showed a 26.11% overall improvement, with 84% successful multi-word clustering of words with infixes (compare 0% for Adamson). The largest cluster contained 14 words. Ten clusters counted as unsuccessful because they contained one superficially similar variation belonging to a different root (e.g. TwL (lengthened) and bTL (to be abolished)). If we allow this error margin, the success rate of multi-word clustering rises to 90%. Since our SC cut-off was significantly lower than in Adamson's baseline experiment, we obtained weaker results for single-word clustering.

The fourth and fifth data sets played no role in the development of our algorithm and were used for evaluation purposes only.
The Two-stage algorithm showed a 23.18% overall improvement on set four. It successfully built all clusters containing words with infixes (100%, compared with 4.35% for Adamson's algorithm), an improvement of 95.65%. The Two-stage algorithm again preserved the strength of Adamson's algorithm at distinguishing single-word clusters, in spite of a lower SC cut-off. The results for the fifth data set are particularly important because the text was drawn from a different source and domain. Again, significant improvements in multi- and single-word clustering are visible, with a slightly higher SC cut-off. The algorithm performed markedly better at identifying multi-word clusters with infixes (72.72%, compared with 9.09% for Adamson).

The results suggest that the Two-stage algorithm preserves the strengths of Adamson and Boreham (1974), whilst adding a marked advantage in recognising infixes. The outcome of the evaluation on the fourth and fifth data sets is very encouraging, and though the samples are small, they give a strong indication that this kind of approach may transfer well to text from different domains on a larger scale.

5 Two-stage Algorithm Limitations

Weak letters can be root consonants, but our differential weighting technique prevents them from contributing strong evidence, whereas non-weak letters featuring in affixes are allowed to contribute full weight. Modifying this arrangement would interfere with successful clustering (e.g. after light stemming, t is a root consonant in ntj (produced) but an infix in Ltqy, from root Lqy (encountered)). These limitations are a result of light stemming. Although the current results are promising, evaluation was hampered by the lack of a sizeable data set to verify whether our solution would scale up.

Conclusion

We have successfully developed an automatic classification algorithm for Arabic words which share the same root, based only on their morphological similarities. Our approach works on unsanitised text.
Our experiments show that algorithms designed for relatively uninflected languages can be adapted to highly inflected languages by using morphological knowledge. We found that the Two-stage algorithm gave a significant improvement over Adamson's algorithm for our data sets. It dealt successfully with infixes in multi-word clustering, an area where Adamson's algorithm failed. It matched the strength of Adamson's algorithm in identifying single-word clusters, and sometimes did better. Weak letters and the overlap between root and affix consonants continue to cause interference. Nonetheless, the results are promising and suggest that the approach may scale up.

Future work will concentrate on two issues. The light stemming algorithm and the differential weighting may be modified to improve the identification of affixes. The extent to which the algorithm can be scaled up must be tested on a large corpus.

Acknowledgements

Our thanks go to the Kuwait State's Public Authority for Applied Education and Training for the research studentship supporting this work, and to two anonymous referees for detailed, interesting and constructive comments.

Appendix - Arabic in a Nutshell

The vast majority of Arabic words are derived from 3 (and a few 4) letter roots via a complex morphology. Roots give rise to stems by the application of a set of fixed patterns; addition of affixes to stems yields words. Table 9 shows examples of stem derivation from 3-letter roots.

Table 9: Stem Patterns

  Root           Pattern   Stem
  ktb (wrote)    fa?L      katb (writer)
                 mf?wL     mktwb (document)
  qtL (killed)   fa?L      qatL (killer)
                 mf?wL     mqtwL (corpse)

Stem patterns are formulated as variations on the characters f?L (pronounced f'l; ? is the symbol for ayn, a strong glottal stop), where each of the successive consonants matches a character in the bare root (for ktb, k matches f, t matches ? and b matches L); stems follow the pattern as directed. As the examples show, each pattern has a specific effect on meaning.
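The root-pattern derivation in Table 9 can be illustrated as follows (a simplified sketch: it treats f, ? and L in the pattern purely as slots for the root's three consonants and all other pattern letters as literals, ignoring the vowel changes real derivation involves):

```python
def apply_pattern(root, pattern):
    """Instantiate a stem pattern with a 3-letter root: f, ? and L in the
    pattern are replaced by the root's consonants in order; any other
    pattern character is copied through as part of the pattern itself."""
    slots = dict(zip("f?L", root))
    return "".join(slots.get(ch, ch) for ch in pattern)
```

For example, apply_pattern("ktb", "mf?wL") yields mktwb (document), matching Table 9.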
Several hundred patterns exist, but on average only about 18 are applicable to each root (Beesley 1998). The language distinguishes between long and short vowels. Short vowels affect meaning, but are not normally written. However, patterns may involve short vowels, and the effects of some patterns are indistinguishable in written text; readers must infer the intended meaning.

Affixes may be added to the word, either under derivation or to mark grammatical function. For instance, walktab breaks down as w (and) + al (the) + ktab (writers, or book, depending on the voweling). Other affixes function as person, number, gender and tense markers, subject and direct object pronouns, articles, conjunctions and prepositions, though some of these may also occur as separate words (e.g. wal (and the)).

Arabic morphology presents some tricky NLP problems. Stem patterns "interdigitate" with root consonants, which makes words difficult to parse. Also, the long vowels a (alif), w (waw) and y (ya) can occur as root consonants, in which case they are considered to be weak letters, and the root a weak root. Under certain circumstances, weak letters may change shape (e.g. waw into ya) or disappear during derivation. Long vowels also occur as affixes, so identifying them as affix or root consonant is often problematic. The language makes heavy use of infixes as well as prefixes and suffixes, all of which may be consonants or long vowels. Apart from breaking up root letter sequences (which tend to be short), infixes are easily confused with root consonants, whether weak or not.

The problem for affix detection can be stated as follows: weak root consonants are easily confused with long-vowel affixes, and consonant affixes are easily confused with non-weak-letter root consonants. Erroneous stripping of affixes will yield the wrong root.

Arabic plurals are difficult. The dual and some plurals are formed by suffixes, in which case they are called external plurals.
The broken, or internal, plural, however, changes the internal structure of the word according to a set of patterns. To illustrate the complexity: masculine plurals take a -wn or -yn suffix, as in mhnds (engineer), mhndswn; female plurals add the -at suffix, or change word-final -h to -at, as in mdrsh (teacher), mdrsat. Broken plurals affect root characters, as in mal (fund, from root mwl), amwal, or wSL (link, from root wSL), 'aySaL. The examples are rife with long vowels (weak letters) and illustrate the degree of interference between broken plural patterns and other ways of segmenting words.

Regional spelling conventions are common: e.g. three versions of word-initial alif occur. The most prominent orthographic problem is the behaviour of hamza ('), a sign written over a carrier letter and sounding a lenis glottal stop (not to be confused with ayn). Hamza is not always pronounced. Like any other consonant, it can take a vowel, long or short. In word-initial position it is always carried by alif, but may be written above or below, or omitted. Mid-word it is often carried by one of the long vowels, depending on rules whose complexity often gives rise to spelling errors. At the end of words, it may be carried or written independently. Hamza is used both as a root consonant and an affix, and is subject to the same problems as non-weak-letter consonants, compounded by unpredictable orthography: identical words may have differently positioned hamzas and would be considered as different strings.

References

Adams, E. (1991) A Study of Trigrams and their Feasibility as Index Terms in a Full Text Information Retrieval System. PhD Thesis, George Washington University, USA.

Adamson, George W. and J. Boreham (1974) The use of an association measure based on character structure to identify semantically related pairs of words and document titles. Information Storage and Retrieval, Vol. 10, pp. 253-260.

Al-Fedaghi, Sabah S.
and Fawaz Al-Anzi (1989) A new algorithm to generate Arabic root-pattern forms. Proceedings of the 11th National Computer Conference, King Fahd University of Petroleum & Minerals, Dhahran, Saudi Arabia, pp. 04-07.

Al-Kharashi, I. and M. Evens (1994) Comparing words, stems, and roots as index terms in an Arabic information retrieval system. Journal of the American Society for Information Science, 45/8, pp. 548-560.

Al-Najem, Salah R. (1998) An Explanation of Computational Arabic Morphology. DATR Documentation Report, University of Sussex.

Al-Raya (1997) Newspaper. Qatar.

Al-Shalabi, R. and M. Evens (1998) A Computational Morphology System for Arabic. Proceedings of COLING-ACL, New Brunswick, NJ.

Al-Watan (2000) Newspaper. Qatar.

Beesley, K.B. (1996) Arabic Finite-State Morphological Analysis and Generation. Proceedings of COLING-96, pp. 89-94.

Beesley, K.B. (1998) Arabic Morphological Analysis on the Internet. Proceedings of the 6th International Conference and Exhibition on Multi-Lingual Computing, Cambridge.

El-Sadany, T. and M. Hashish (1989) An Arabic morphological system. IBM Systems Journal, 28/4.

Harman, D. (1991) How effective is suffixing? Journal of the American Society for Information Science, 42/1, pp. 7-15.

Hmeidi, I., Kanaan, G. and M. Evens (1997) Design and Implementation of Automatic Indexing for Information Retrieval with Arabic Documents. Journal of the American Society for Information Science, 48/10, pp. 867-881.

Kiraz, G. (1994) Multi-tape two-level morphology: a case study in Semitic non-linear morphology. Proceedings of COLING-94, pp. 180-186.

Lovins, J.B. (1968) Development of a Stemming Algorithm. Mechanical Translation and Computational Linguistics, 11/1.

Popovic, M. and P. Willett (1992) The effectiveness of stemming for natural language access to Slovene textual data. Journal of the American Society for Information Science, 43/5, pp. 384-390.

Porter, M.F. (1980) An Algorithm for Suffix Stripping. Program, 14/3, pp. 130-137.

Stalls, B.
and Knight, K. (1998) Translating names and technical terms in Arabic text. Proceedings of COLING-ACL, New Brunswick, NJ.

van Rijsbergen, C.J. (1979) Information Retrieval. Butterworths, London.

Robertson, A. and Willett, P. (1992) Searching for historical word-forms in a database of 17th-century English text using spelling-correction methods. Proceedings of the 15th Annual International ACM SIGIR Conference.

Abu-Salem, H., Al-Omari, M. and M. Evens (1999) Stemming methodologies over individual query words for an Arabic information retrieval system. Journal of the American Society for Information Science, 50/6, pp. 524-529.
    !" $#%& (')" * + , .-/ ('0 #12346587&9;:=<>5?A@CBEDGFIH,JLKMON69QPSR25?S:=TU9QPWVCXZY[@\B(D]F^9 _"`GabJLcdfe(`GKgd/hLi j3hWekaldf`]c*m\nGop`GKn]` qLhWrKs*thWau8ovKs*wKoyxW`]csfoyd z {|JL}yd~opekhWc`L\€_ƒ‚C„A‚\„[… †‡eEJˆop}ЉŒ‹A\ŽWbS‘ ’A“C$” •g–[—bŽW8˜g‘ ™Sšb–8’œ›ˆ—Cž|›ŸW˜Wž  ¢¡ D£Y[?A5bPLY ¤^¥A¦¨§/©ˆª«©L¬­*©S­;¬f§Q¬®G¯.§*ª±°²£­Š©A³S§Q´pµSªG§Q¬f¶¢ª«·¸´ ¹ ²£­Š¦º¯Š¥A»¼°ª£©Sª«µA·½¬3²«¾Z¦½®S¶[³ˆ°¦½® ¹ ¦¸®[¿ˆ¬° ¯Š¦¸²G®Sª«· »|²£­Š©A¥A²G·¸² ¹ ¦¨°ª£·Àª£®Sª«·½Á[§Q¬f§Â²«¾¢µˆ²£¯;¥Ã­Š¬ ¹ ³A´ ·¨ª«­/ª£®S¶E¥A¦ ¹ ¥A·½ÁĦ¸­Š­Š¬ ¹ ³S·½ª£­¾Œ²G­;»Å§Æv§Q³S°.¥6ªG§ µA­Š²£³ ¹ ¥]¯ ÇĵA­Š¦¸® ¹ÉÈ ¾Œ­Š²£»Ê¶[¦¨§Q¯;­Š¦¸µA³A¯;¦½²£®Sª£·[©SªË¯;´ ¯Š¬­Š®S§¦¸®Ã·¨ª«­ ¹ ¬Ì»|²G®A²£·½¦¸® ¹ ³Sª£·Í¯Š¬ÎɯÐÏ&¦º¯Š¥ ®A²Ð¶[¦½­Š¬° ¯k§;³A©L¬­ŠÑɦ½§;¦¸²G®8ÒÓ¤^¥A¬6ª«· ¹ ²G­;¦¸¯;¥A» °²£»$µS¦¸®A¬f§ ¾Œ²G³A­I²£­Š¦ ¹ ¦¸®Sª£·Sª«·½¦ ¹ ®A»|¬®]¯I»3²[¶[¬·½§ µSªG§Q¬f¶"²£®­Š¬·¨ªË¯Š¦¸ÑG¬ °²£­Š©A³S§&¾Œ­;¬fÔG³S¬®S°Á£ÕW°²£®A´ ¯Š¬Îɯ;³Sª£·I§;¦½»3¦½·¨ª«­Š¦º¯ÖÁGÕÏI¬¦ ¹ ¥]¯Š¬¶Ä§Q¯;­Š¦¸® ¹ §Q¦½»|¦º´ ·¨ª«­Š¦º¯ÖÁ3ª«®S¶|¦½®S°­;¬»3¬®]¯Šª«·½·½Á­Š¬¯Š­Šª£¦¸®A¬f¶3¦¸®[¿ˆ¬° ´ ¯Š¦¸²G®Sª«·¯;­.ª«®ˆ§;¶[³ˆ° ¯;¦½²£®Í©A­Š²£µSª£µA¦¸·½¦¸¯;¦½¬§Ò$×]¯Šª£­Q¯;´ ¦½® ¹ Ï&¦º¯Š¥*®A²3©Sª«¦½­Š¬¶Ø&¦½®[¿S¬f° ¯;¦½²£®CÕ ­Š²É²«¯Ù6¬ÎÉ´ ª£»3©S·¸¬f§I¾Œ²£­I¯;­.ª«¦½®A¦½® ¹ ª«®S¶®A²|©A­Š¦¸²G­^§;¬¬f¶[¦½® ¹ ²£¾$·¸¬ ¹ ª£·œ»|²G­;©A¥S²£·½² ¹ ¦¨°ª£·¯Š­Šª£®S§Ö¾Œ²G­;»Åª«¯;¦½²£®S§Õ ªG°°³S­ŠªG°Á$²£¾W¯;¥A¬Ú¦¸®S¶A³S°¬f¶/ª«®Sª£·¸Á[§;¬§Û²«¾8Ü£ÝGÝ£Ý ©SªG§Ö¯;´v¯Š¬®S§;¬|¯;¬f§Ö¯|°ªG§Q¬f§Þ¦¸®kßÛ® ¹ ·½¦¨§Q¥À¬ÎA°¬¬¶S§ àGà ÒâáGã侌²G­¯;¥A¬Ä§;¬¯fՇÏ&¦º¯Š¥Ê°³A­Š­Š¬®]¯;·½Áå²ËÑ£¬­ ÝGæ]ãçªG°°³S­ŠªG°Á"²£®Í¯;¥A¬Å»|²]§Ö¯‡¥A¦ ¹ ¥A·½Á覽­;­Š¬ ¹ ´ ³A·¨ª«­I¾Œ²G­;»Å§^ª«®S¶ à£à ÒâéGã̪G°°³A­ŠªG°Á|²£®*¾Œ²£­Š»Å§ ¬ÎÉ¥S¦¸µA¦¸¯;¦½® ¹ ®A²£®A´ê°²£®S°ªË¯;¬®SªË¯Š¦¸ÑG¬Z§;³[ë|ÎAªË¯;¦½²£®CÒ ì í 5bD]F=4åVgîœXÚ9ïYA9Ö@ÛX ¤^¥A¦¨§&©Sª£©ˆ¬­&©A­;¬f§Q¬®]¯Š§&ª«®²£­Š¦ ¹ ¦½®Sª«·gª«®ˆ¶§;³S°°¬§Š§Q¾Œ³A·8ª«·¸´ ¹ ²G­;¦¸¯;¥S»=¾Œ²G­œ¯;¥A¬3®A¬ª£­;·½Á"³A®S§;³A©L¬­ŠÑ]¦¨§;¬¶è¦½®S¶[³ˆ° ¯;¦½²£®Í²«¾ ¦½®[¿S¬°¯;¦½²£®Sª£·8»|²£­Š©A¥A²G·¸² ¹ ¦¨°ª«·gª£®Sª«·½ÁÉð¬­.§Õ]Ï&¦¸¯;¥èª¾Œ²[°³ˆ§ ²£®|¥A¦ ¹ ¥S·¸Á¦¸­Š­;¬ ¹ 
³A·¨ª«­¾Œ²G­;»Å§b®A²£¯b¯ÖÁÉ©A¦½°ª«·½·¸Á¥Sª«®ˆ¶[·¸¬f¶|µÉÁ ²«¯Š¥A¬­I»|²£­Š©A¥A²G·¸² ¹ Á3¦¸®S¶A³S° ¯Š¦¸²G®ª«· ¹ ²G­;¦¸¯;¥S»|§Òñê¯I¦¨§I³S§Q¬´ ¾Œ³A·g¯Š²/°²G®S§Q¦¨¶[¬­Z¯Š¥A¦¨§^¯ŠªG§Qò*ªG§I¯;¥A­Š¬¬ §;¬©ˆª«­.ªË¯;¬Þ§Q¯;¬©ˆ§ó ô È ßZ§Ö¯Š¦¸»ÅªË¯Š¬Íª±©S­;²GµSª«µA¦½·½¦½§Q¯;¦¨°>ª£·¸¦ ¹ ®A»|¬®G¯µˆ¬¯ÖÏZ¬¬® ¦½®[¿S¬f° ¯;¬f¶|¾Œ²£­Š»Å§Ûª£®S¶|­;²É²£¯ ¾Œ²£­Š»Å§b¦¸®ª ¹ ¦½Ñ£¬®Å·½ª£®[´ ¹ ³Sª ¹ ¬ á È ¤\­Šª£¦¸®õªö§;³A©L¬­ŠÑɦ½§;¬¶÷»|²£­Š©A¥A²G·¸² ¹ ¦¨°ª«·èª£®Sª«·½Á[§Q¦¨§ ·½¬ª£­;®A¬­^²£®"ªÏI¬¦ ¹ ¥]¯Š¬¶*§;³AµS§;¬¯&²£¾\¯Š¥A¬§;¬‡ª«·½¦ ¹ ®A¬¶ ©Sª£¦¸­.§Ò Ü È*ø §Q¬¯Š¥A¬>­Š¬§;³A·¸¯*²«¾$×]¯Š¬©ùáĪ£§/¬¦º¯Š¥A¬­ªÄ§Q¯Šª«®ˆ¶É´ ª£·¸²G®A¬ª«®Sª£·¸ÁÉ𬭲£­ÛªÞ©S­;²GµSª«µA¦½·½¦½§Q¯;¦¨°^§;°²£­Š¦¸® ¹ °²£»3´ ©L²£®A¬®]¯‡¯;²"¦º¯Š¬­.ªË¯Š¦¸ÑG¬·½Á­;¬úS®A¬3¯;¥A¬/ª£·¸¦ ¹ ®S»3¬®]¯Þ¦½® ×]¯Š¬© ô Ò ¤^¥A¬^¯Šª£­ ¹ ¬¯²£³[¯Š©A³[¯ ²£¾g×]¯Š¬© ô ¦½§bª£®¦½®[¿S¬f° ¯Š¦¸²G®[´p­Š²É²«¯ »Åª«©S©A¦¸® ¹ §Q³S°.¥ªG§§Q¥S²ËÏ&®¦½®è¤ª«µA·½¬ ô ÕAÏ&¦¸¯;¥²G©[¯;¦½²£®ˆª«· °²£·½³A»|®S§ ¹ ¦½Ñɦ¸® ¹ ¯;¥A¬¥ÉÁÉ©L²«¯;¥S¬§;¦¸ð¬¶|§Ö¯Š¬»û°.¥Sª«® ¹ ¬ª«®S¶ §;³[ë|Ϊ£®Sª«·½Á[§Q¦¨§^ª£§^ÏI¬·½·8ª£§^©Sª£­Q¯&²«¾b§;©ˆ¬¬°.¥8Ò ü;ý[þGÿ  ý  Lþ ü   fþ]ý    ü  !  " #%$ &  " ')(*  +,$ #.-0/21 3-4/1 ')(  $56$ #.7 87 ')(9  +,$ #. / 8/ ':( 7;3-4< $=,< #.?> 7;3-0<2<?> ')(* >28@A AB,#.?> >28C?> ')(* >28@A AB!-0 #.7 >28C 7 ')(9 >28@A $56$ #.-0/21 >28@A-0/21 ')( DFE 1HG 1HG., 1 #% DFE  1 ' 2IŠü DFE 1HG 1HG., 1 #%/ DFE  1H/ ' IJ DFE 1HG HG.K$ #%LM "7 DFE 1LM 7 ' ON& & /8G  /8G.!-4 / #. 
/ &-4 / / ' IJ ¤ª«µA·½¬ ô óA¤ª«­ ¹ ¬¯I²G³[¯;©A³A¯$Æyß ® ¹ ·½¦½§;¥ª«®S¶"×É©Sª£®A¦½§;¥ È ¤^¥A¦¨§Í§;³[ë|ÎÉ´v¾Œ²[°³ˆ§Q¬f¶Â¯;­.ª«®S§Q¾Œ²£­Š»ÅªË¯Š¦¸²G®Sª«· »|²[¶[¬·¦¨§ ®A²£¯Õ2ªG§ ¹ ¦¸ÑG¬®8Õ§;³[ë/°¦½¬®]¯*¾Œ²G­·¨ª«® ¹ ³Sª ¹ ¬§/Ï&¦º¯Š¥å©A­Š¬´ úAÎAª£·vÕA¦½®[úAÎAª£·\ª«®ˆ¶­Š¬¶A³A©A·½¦½°ªË¯;¦½Ñ£¬‡»|²G­;©A¥S²£·½² ¹ ¦½¬§ÒQPI³[¯ ¦¸¯I¦¨§I­;¬»Åª«­Šò~ª£µA·½Á©A­Š²[¶[³S°¯;¦½Ñ£¬ÚªG°­Š²G§Š§ ñï®S¶A²«´ïß ³A­Š²£©L¬ª£® ·¨ª«® ¹ ³Sª ¹ ¬§¦½®3¦¸¯Š§Z°³A­Š­Š¬®]¯¾Œ²£­Š»Ìª«®S¶|°ª«®|µˆ¬&¬Îɯ;¬®ˆ¶[¬¶ ¯Š²Å²«¯;¥S¬­ªËë|ÎAªË¯Š¦¸²G®Sª«·g§Š°.¥A¬»Åª3Ï&¥A¬®ª£©A©A­Š²£©A­Š¦½ª«¯;¬GÒ RA²G­Û»Åª«®ÉÁª£©A©A·½¦½°ªË¯Š¦¸²G®S§Õ]²£®ˆ°¬^¯;¥S¬2Ñ£²[°ª«µA³A·¨ª«­ŠÁ ·½¦½§Q¯ ªG°.¥A¦¸¬Ñ£¬f§Í§;³[ë/°¦½¬®]¯;·½ÁеA­Š²GªG¶Ð°²ËÑ£¬­Šª ¹ ¬GՇ¯;¥A¦¨§Àª«·½¦ ¹ ®[´ »|¬®]¯ ¯Šª£µA·½¬÷¬HSL¬f° ¯Š¦¸ÑG¬·½ÁUT V8W X3YZV\[ ª)»3²G­;©S¥A²£·½² ¹ ¦º´ °ª«·ª£®Sª«·½ÁÉð¬­Ä§Q¦½»|©A·¸ÁûµÉÁʯ.ª«µA·½¬å·½²]²GòɳA©¼ÆŒ¦½®S¶[¬©ˆ¬®[´ ¶[¬®]¯k²«¾®A¬f°¬§Š§Šª«­ŠÁ̰²G®G¯Š¬Îɯ;³ˆª«·*ª£»$µA¦ ¹ ³A¦¸¯ÖÁÌ­Š¬§;²£·½³[´ ¯Š¦¸²G® È Ò,]6¥S¦¸·½¬À¯;¥A¬k©A­;²GµSª«µS¦¸·½¦½§Q¯;¦¨°Àª«®Sª£·¸ÁÉ𬭯;­.ª«¦½®A¬¶ ¦½® ×]¯Š¬© ᢭Ь»Åª£¦¸®S§>³S§;¬¾Œ³A·¾Œ²£­Í©A­;¬Ñɦ¸²G³S§Q·½Á ³S®S§Q¬¬® ÏI²£­.¶A§ÕA§Q³S°.¥ÏI²£­.¶A§&ª«­Š¬2¯ÖÁÉ©A¦¨°ª£·¸·½Á*Ô]³A¦¸¯;¬ ­Š¬ ¹ ³A·¨ª«­&ª«®S¶ »|²G§Q¯‡²«¾ ¯Š¥A¬Å¶[¦¸ëŰ³A·¸¯‡§;³AµS§Q¯Šª£®S°¬|²£¾ ¯;¥A¬|·½¬»|»ÅªË¯Š¦¸ðfªË´ ¯Š¦¸²G®ù©A­Š²£µS·¸¬»1°ª£®ù²£¾ ¯;¬®µL¬Ä°ª«©A¯;³A­Š¬¶ µ]Áùª(·¨ª«­ ¹ ¬ ­Š²É²«¯?^`_)acb"d±¦½®[¿S¬f° ¯Š¦¸²G®å»Åª«©A©A¦½® ¹ ¯Šª«µS·¸¬ª«®S¶ùªk§;¦¸»3´ ©A·½¬/¯Š­Šª£®S§Š¶[³S°¬­‡¯;²>¥Sª«®S¶A·¸¬/­Š¬§;¦¨¶[³Sª«·Û¾Œ²£­Š»Å§Ò¤^¥A¦¨§ ¦¨§ ®A²£¯¯Š¥A¬°ªG§Q¬*¾Œ²£­|ª ¹£¹ ·¸³A¯;¦½®SªË¯Š¦¸ÑG¬*·¨ª«® ¹ ³Sª ¹ ¬§$§;³S°.¥Eª£§ ¤\³A­Šò]¦¨§;¥å²£­eR¦¸®A®S¦½§;¥8՜²£­¾Œ²£­Ñ£¬­ŠÁE¥A¦ ¹ ¥A·½Á6¦¸®[¿ˆ¬° ¯Š¬¶ ·¨ª«® ¹ ³Sª ¹ ¬§3§Q³ˆ°.¥(ª£§gfZð¬°.¥8ÕZÏ&¥A¬­Š¬§;©Sª£­Š§;¬¶Aª«¯ŠªÍµˆ¬´ °²£»|¬§Zª«®Å¦¨§;§;³A¬£ÒhPI³[¯Z¾Œ²£­Û»Åª£®]Á3·¨ª«® ¹ ³Sª ¹ ¬§Õ£ª«®S¶3¯Š²ª Ô]³A¦¸¯;¬/©S­ŠªG° ¯;¦¨°ª£·Û¶[¬ ¹ ­Š¬¬£Õ\¦¸®A¿S¬°¯;¦½²£®Sª£·Û»|²£­Š©A¥A²G·¸² ¹ ¦¨°ª«· ª£®Sª«·½Á[§Q¦¨§ ª«®ˆ¶ ¹ ¬®S¬­.ªË¯;¦½²£®±°ª«®Àµˆ¬Ñɦ¸¬ÏZ¬f¶Í©S­;¦½»Åª«­Š¦¸·½Á ªG§\ª£®jik0l0m3n)YZVHnoɯ.ª£§;òœ²G®ª2µA­;²]ª£¶ °²ËÑ£¬­Šª ¹ ¬bÏI²£­.¶[·¸¦¨§Q¯Ò 
¤^¥]³ˆ§ÕbÏ&¥A¦½·½¬/¯;¥S¦½§©Sª«©L¬­3Ï&¦½·¸·^¶[¦¨§;°³S§;§²£³S­$¦½»3©S·¸¬´ »|¬®]¯Šª«¯;¦½²£®è²£¾Ûª/§Q¯Šª«®ˆ¶É´êª£·¸²G®A¬ ©A­Š²£µˆª«µA¦½·¸¦¨§Q¯;¦¨° ª«®Sª£·¸ÁÉ𬭠ª«®ˆ¶*­Š¬¯Š­Šª£¦¸®A¦½® ¹ ©A­Š²[°¬§Š§I¦¸®è×]¯Š¬©S§Ú᪣®S¶ÜAÕɯ;¥S¬ °.¥Sª«·¸´ ·½¬® ¹ ¬>²«¾·½ª£­ ¹ ¬´ï°²ËÑ£¬­Šª ¹ ¬¦¸®[¿ˆ¬° ¯Š¦¸²G®[´ê­;²É²«¯"ª«·½¦ ¹ ®A»|¬®]¯ ¬Î[©A­Š¬§Š§;¬¶/¦½®è×]¯;¬© ô ¦½§I¯Š¥A¬ °²G­;¬Þ²«¾\¯;¥A¦¨§&ÏI²£­ŠòWÒ p:qrp sutvxwcyrzt{}| ~{€Z‚ƒy…„)~| †.s‡tˆ"„)wcz‰tˆ ñï®|¾Œ³A­;¯;¥A¬­Z°·½ª£­;¦¸úˆ°ª«¯;¦½²£®|²«¾W¯Š¥A¬¯.ª£§;ò¶[¬f§;°­;¦½©[¯;¦½²£®CÕ£¯Š¥A¬ »|²£­Š©A¥A²G·¸² ¹ ¦¨°ª«·^¦½®S¶[³S°¯;¦½²£®å¶[¬f§;°­;¦½µL¬¶¢¦½®¢¯;¥A¦¨§/©Sª£©ˆ¬­ ª£§Š§;³A»|¬§Ոª«®ˆ¶"¦½§2µSª£§;¬¶"²G®8Ո²G®A·½Á*¯Š¥A¬ ¾Œ²G·¸·½²ËÏ&¦¸® ¹ ·¸¦½»3´ ¦¸¯;¬¶"§;¬¯²£¾Iƌ²£¾ ¯;¬®²G©[¯;¦½²£®ˆª«· È ª~Ñ~ª£¦¸·¨ª«µS·¸¬œ­;¬f§Q²G³A­.°¬§ó Æyª ÈjŠ ¯.ª«µA·½¬èÆy§;³S°.¥±ª£§‡¤ª«µA·½¬*á È ²£¾Z¯Š¥A¬Å¦¸®A¿S¬°¯;¦½²£®Sª£· ©Sª£­Q¯.§Þ²«¾&§Q©L¬¬f°.¥Í²£¾Z¯Š¥A¬ ¹ ¦½Ñ£¬®Í·¨ª«® ¹ ³Sª ¹ ¬£Õ8ª£·¸²G® ¹ Ï&¦¸¯;¥Äª·½¦¨§Ö¯$²£¾^¯;¥A¬*°ª«®A²G®A¦¨°ª«·Z§Q³[ë|Î[¬§ ¾Œ²£­ ¬fª£°.¥ ©Sª£­Q¯Ú²«¾ §Q©L¬¬f°.¥8Ò¤^¥A¬f§Q¬$§;³[ë|Î[¬§&®S²«¯Ú²G®A·½Á§;¬­ŠÑ£¬ ªG§»|®A¬»|²£®A¦¨°¯Šª ¹ §¾Œ²£­¯;¥S¬Œ‹.Þ×6·¨ª«µL¬·¨§՜µS³[¯ ¯Š¥A¬Á/°ª«®/ª«·¨§;²$µL¬2³S§;¬¶|¯Š²²Gµ[¯Šª£¦¸®ª‡®A²£¦¨§;Á|§;¬¯I²«¾ °ª«®S¶[¦¨¶Aª«¯;¬^¬ÎAª«»|©A·½¬§¾Œ²G­b¬ªG°.¥©Sª£­Q¯b²£¾L§;©ˆ¬¬°.¥8Ò N ƌµ ÈjŠ ·¨ª«­ ¹ ¬œ³A®Sª£®A®A²«¯.ªË¯Š¬¶*¯;¬Îɯ2°²£­Š©A³S§Ò Æy° ÈjŠ ·½¦½§Q¯I²£¾8¯;¥S¬Þ°ª«®S¶A¦½¶Aª«¯;¬œ®A²£³S®8ÕÉÑ£¬­Šµª£®S¶*ªG¶ŽÖ¬°´ ¯Š¦¸ÑG¬/­;²É²£¯Š§‡²£¾&¯;¥A¬/·¨ª«® ¹ ³Sª ¹ ¬èÆ ¯ÖÁÉ©A¦¨°ª£·¸·½Á²£µ[¯.ª«¦½®[´ ª£µA·¸¬I¾Œ­Š²£»ûªœ¶A¦½°¯;¦½²£®Sª£­;Á È ÕGª«®S¶3ª«®ÉÁ ­Š²£³ ¹ ¥»|¬f°.¥[´ ª£®A¦½§;» ¾Œ²£­¦½¶[¬®]¯;¦¸¾ŒÁ]¦½® ¹ ¯;¥A¬"°ª£®S¶[¦¨¶AªË¯Š¬/©Sª«­;¯Š§$²«¾ §;©ˆ¬¬°.¥²«¾C¯Š¥A¬Þ­;¬»Åª«¦½®A¦¸® ¹ Ñ£²[°ª«µA³A·¨ª«­ŠÁ|µSª£§;¬¶*²£® ª ¹£¹ ­Š¬ ¹ ªË¯Š¬2»|²[¶[¬·¨§^²«¾°²£®]¯Š¬Îɯ&²£­I¯Šª ¹ §;¬Ô]³A¬®S°¬GÕ ®A²£¯»|²£­Š©A¥A²G·¸² ¹ ¦¨°ª£· ª«®Sª£·¸Á[§;¦½§ҏÚ³A­°²£®S°³A­;­Š¬®]¯ ÏI²£­ŠòÆFfZ³S°¬­;ðfª«®*ª£®S¶ZÛª£­;²Ëϧ;òÉÁ£Õ[á«æ£æGæ È ¾Œ²[°³S§Q¬f§ ²G® ¯;¥A¬©A­Š²£µA·½¬» ²«¾Íµˆ²É²«¯.§Ö¯Š­Šª£©A©A¦½® ¹ ª£©A©A­Š²~Îɦ¸´ »ÅªË¯Š¬è¯.ª ¹ ©S­;²GµSª«µA¦½·½¦º¯ÖÁ¢¶[¦¨§Ö¯Š­;¦½µA³[¯Š¦¸²G®S§*µÉÁE»|²É¶[´ ¬·¸·½¦¸® ¹ 
­Š¬·¨ªË¯Š¦¸ÑG¬ÍÏI²£­.¶É´p¾Œ²£­Š»ƒ²É°°³A­Š­Š¬®S°¬Í©A­Š²£µˆªË´ µA¦½·½¦º¯Š¦¸¬f§&ª£°­Š²G§Š§ ¦½®S¶[¦¨°ª«¯;¦½Ñ£¬Ú·¸¬Îɦ¨°ª£·g°²£®]¯Š¬ÎɯЧœÆŒ¬GÒ ¹ Ò ‘ oO’V‡Ø+“”ac•–“ٗi˜JVJ™/ª«®ˆ¶ ‘ T V V\n؛šcœž ٟo ’ VJ™ È Õ ª£»3²G® ¹ ²«¯Š¥A¬­>©A­;¬f¶[¦¨° ¯;¦½Ñ£¬kÑ~ª£­;¦¨ª«µS·¸¬f§ÕÞÏ&¦¸¯;¥Ð¯Š¥A¬ ¹ ²Gª«·S²«¾8°²£´v¯Š­Šª£¦¸®A¦½® ¹ Ï&¦¸¯;¥|¯Š¥A¬2»|²[¶[¬·¨§ ©S­;¬f§Q¬®G¯Š¬¶ ¥A¬­;¬GÒ ñê¯2¦¨§&®A²£¯®A¬°¬§Š§;ª£­;Á|¯Š²/§Q¬·¸¬f° ¯&¯;¥S¬ ©Sª«­;¯&²«¾ §;©ˆ¬¬°.¥À²«¾ª"ÏI²£­.¶¦¸®±ª«®ÉÁ ¹ ¦½Ñ£¬®±°²£®]¯;¬Î]¯fÕ\²G®A·½Á ©A­Š²ËÑɦ½¶A¬œª£®*¬§Q¯;¦½»ÅªË¯Š¬œ²£¾8¯;¥S¬Þ°ª«®S¶A¦½¶Aª«¯;¬Ú¯Šª ¹ ¶A¦½§Q´ ¯Š­;¦½µA³[¯Š¦¸²G®S§&ª£°­;²]§;§Zª ¾Œ³A·½·C°²£­Š©A³S§Ò¤^¥A¬‡§;²£³A­.°¬Ú²«¾ ¯Š¥A¬§;¬Z°ª«®S¶[¦¨¶Aª«¯;¬ ¯.ª ¹ ¬f§Ö¯Š¦¸»Åª«¯;¬§\¦¨§C³A®S¦¸»|©L²£­;¯Šª«®]¯fÕ ¥A²ËÏI¬ÑG¬­fÕ$ª«®ˆ¶Â¯;¥A¬E·½¦½§Q¯Š§°ª£®Êµˆ¬¢Ô]³A¦¸¯;¬E®A²£¦¨§QÁGÒ ¤^¥A¬¦¸­&»Åª3ŽÖ²£­I¾Œ³A®S°¯;¦½²£®"¦½§I¯Š²|©Sª«­;¯;¦¨ª«·½·¸Á*·¸¦½»|¦º¯&¯Š¥A¬ ©L²«¯Š¬®]¯;¦¨ª«·ª£·¸¦ ¹ ®A»|¬®G¯Å§;©Sª£°¬*¾Œ­Š²£» ³S®A­;¬f§Ö¯Š­;¦¨° ¯Š¬¶ ÏI²£­.¶É´p¯;²«´êÏI²£­.¶Úª£·¸¦ ¹ ®A»|¬®G¯.§CªG°­Š²G§Š§W¯;¥A¬I¬®]¯Š¦¸­Š¬ Ñ£²£´ °ª«µA³A·¨ª«­ŠÁ£Ò Æy¶ È ¤^¥A¬Ú°³A­Š­;¬®]¯ ¦¸»|©A·½¬»|¬®]¯.ªË¯;¦½²£®/ªG§;§;³A»|¬§Ûª‡·¸¦¨§Q¯Û²«¾ ¯Š¥A¬ °²G®S§Q²G®Sª«®]¯.§Iª£®S¶Ñ£²ËÏI¬·¨§Z²£¾\¯Š¥A¬‡·¨ª«® ¹ ³Sª ¹ ¬£Ò ƌ¬ È ]6¥A¦½·½¬è®A²«¯¬§Š§Q¬®G¯Š¦½ª£·&¯;²Ä¯;¥A¬è¬Î[¬°³A¯;¦½²£®6²«¾Þ¯Š¥A¬ ª£· ¹ ²£­Š¦º¯Š¥A»"Õ«ª2·½¦½§Q¯b²«¾ˆ°²£»|»|²£®$¾Œ³A®S° ¯Š¦¸²G®ÏI²£­.¶A§\²«¾ ¡&¢¤£ =¥4-47F&7/2 ?>`/2 c¦5J§ £  E 7F&-0¨""©3/>M/A`LM-47;7;-4/1 -ªG;G& 1 E ¥0HG%7 E« § 7¬O"­ 12­c £ ®–/1"¥4-47 £ <H7FQ& /7;¯#¯°±¤²?/ ¦³²?H<2 E G& >´¨3-0ej7F& Lµ² £ H/1g/>´/ E ¥4¥57 E« §¶¬O"­ 12­ 7; />·¹¸?º°c#%$»¼7; /J±J©7;-4LM-4¥\G›& ½ £ ¯G& <G& 7; /H&-4 "/ @:"·=¾\¿À;ŒÁ?Á\¿#%$5»,&  "2±J­ ¯Š¥A¬ ¹ ¦½Ñ£¬®*·½ª£® ¹ ³Sª ¹ ¬¦¨§I³S§;¬¾Œ³A·L¯;²$¯;¥A¬œ¬ÎɯŠ­ŠªG° ¯;¦½²£® ²£¾°²£®]¯;¬Îɯ&§;¦½»3¦½·¨ª«­Š¦º¯ÖÁžŒ¬fªË¯Š³A­;¬f§Ò Æ ¾ È ñê¾*ª~Ñ˪£¦¸·¨ª«µA·½¬£ÕÞ¯Š¥A¬kÑ˪£­;¦½²£³S§>¶[¦½§Q¯Šª£®S°¬Â˧;¦¸»|¦½·½ª£­;¦¸¯ÖÁ ¯.ª«µA·½¬§ ¹ ¬®A¬­Šª«¯;¬f¶µÉÁ¯;¥A¦¨§‡ª«· ¹ ²G­;¦¸¯;¥S»õ²£®>©S­;¬Ñ]¦¸´ ²G³S§;·¸ÁŧQ¯;³S¶[¦½¬¶/·½ª£® ¹ ³Sª ¹ ¬f§ °ª£®/µˆ¬œ³S§;¬¾Œ³A·Wª£§I§;¬¬¶ ¦½®[¾Œ²G­;»ÅªË¯Š¦¸²G®8Õ[¬§;©L¬°¦¨ª«·½·½Á/¦º¾¯;¥S¬§;¬‡·½ª£® ¹ 
³Sª ¹ ¬f§Zª£­;¬ °·¸²]§Q¬·¸Á*­;¬·½ª«¯;¬¶ƌ¬GÒ ¹ ÒÛ×É©Sª£®A¦½§;¥"ª«®S¶ñꯊª£·¸¦¨ª«® È Ò Ã N¢V”ÄÖ5Y[VC:=Tä@ ?SF ¤^¥A¬­;¬Í¦½§"ªk­;¦¨°.¥å¯Š­ŠªG¶[¦º¯Š¦¸²G®å²«¾3§Q³A©L¬­ŠÑɦ½§;¬¶ ª«®S¶6³S®[´ §;³A©L¬­ŠÑ]¦¨§;¬¶Ð·½¬ª£­;®A¦½® ¹ ¦¸®Ì¯;¥A¬¢¶[²£»Åª£¦¸®Ê²£¾*»|²£­Š©A¥A²G·º´ ² ¹ Á£ÒµÅ&³A»|¬·¸¥Sª£­Q¯>ª£®S¶ÇÆè°"fZ·½¬·½·½ª£®S¶ûÆ ôà ÝÈ È ÕÞß ¹ ¬f¶[¦ ª£®S¶>×É©A­Š²Gª«¯3Æ ôà ÝGÝ È Õ:ÉC¦¸® ¹ Æ ôfà£à3Ê È ª«®ˆ¶Æ²É²£®S¬Áª«®S¶ fIª£·¸¦SÆ ôà£àË È ¥Sª~ÑG¬b¬ªG°.¥œ¦½®ÉÑ£¬f§Ö¯Š¦ ¹ ªË¯;¬f¶Ú¯;¥A¬I§Q³A©L¬­ŠÑɦ½§;¬¶ ·½¬ª£­;®A¦½® ¹ ²£¾Å¯;¥A¬¢ß ® ¹ ·½¦½§;¥Ð©Sª£§Q¯>¯Š¬®S§;¬Ä¾Œ­Š²£» ©Sª£¦¸­Š¬¶ ¯Š­Šª£¦¸®A¦½® ¹ ¶Aª«¯ŠªAÕ8¯;¥A¬|úS­.§Q¯œ¯ÖÏI²"³S§;¦¸® ¹ ©A¥S²£®A²G·¸² ¹ ¦¨°ª«·½·½ÁG´ µSªG§Q¬f¶ °²£®A®S¬° ¯Š¦¸²G®A¦¨§Ö¯±»|²[¶[¬·¨§±ª«®S¶Ì¯;¥S¬E·¨ªË¯;¯;¬­À¯ÖÏZ² ©L¬­;¾Œ²£­Š»|¦¸® ¹ °²£»|©Sª£­Šª«¯;¦½Ñ£¬&§Ö¯Š³S¶[¦½¬§bÏ&¦¸¯;¥Åñ&ÌœÜ ¶[¬f°¦¨§Q¦½²£® ¯Š­;¬¬§&ª«®ˆ¶*úˆ­Š§Q¯Q´ê²£­.¶[¬­^¶[¬f°¦¨§Q¦½²£®·¸¦¨§Q¯Š§&­Š¬§;©ˆ¬f° ¯Š¦¸ÑG¬·½Á£Ò PI­;¬®G¯*Æ ôàGà ÜAÕ ôà£àGà È Õ8¶[¬‡Æèª£­Š°.òG¬®(Æ ôàGà2Ë È Õ”Í‡ª«ðfªË´ òG²ËÑùÆ ôfà£à é È ª«®ˆ¶ÏΜ²£·¨¶A§Q»|¦¸¯;¥ÊÆpá«æ£æGæ È ¥Sª~ÑG¬¬ªG°.¥¢¾Œ²«´ °³S§Q¬f¶²£®¯;¥A¬$©A­;²GµA·½¬»ö²£¾³A®ˆ§Q³A©L¬­ŠÑɦ½§;¬¶·½¬ª£­;®A¦½® ¹ ²«¾ »|²£­Š©A¥A²G·¸² ¹ ¦¨°ª£·\§;ÁɧQ¯;¬»Å§ÚªG§2¬f§;§;¬®]¯;¦¨ª«·½·½Á"ª§;¬ ¹ »|¬®G¯.ªË´ ¯Š¦¸²G®‡¯ŠªG§QòWÕ~Áɦ¸¬·½¶[¦½® ¹ ª»|²£­Š©A¥A²G·¸² ¹ ¦¨°ª«·½·½Á2©A·¨ª«³S§;¦½µA·¸¬Iª«®S¶ §Q¯Šª«¯;¦¨§Ö¯Š¦½°ª«·½·¸Áè»|²£¯;¦½Ñ~ª«¯;¬f¶>©Sª£­Q¯Š¦º¯Š¦¸²G®Í²£¾I§Q¯;¬»|§‡ª£®S¶ªË¾ ´ úAÎ[¬f§ÒÐPI­;¬®G¯*ª£®S¶6¶[¬Æ>ª«­.°.ò£¬®EµL²«¯;¥6¥ˆª~Ñ£¬³S§;¬¶6ª »|¦½®A¦¸»³A»ç¶[¬§Š°­Š¦¸©A¯;¦½²£®"·¸¬® ¹ ¯Š¥"¾Œ­Šª£»|¬ÏI²£­ŠòLÕ[Ï&¦¸¯;¥"¯Š¥A¬ ©A­Š¦½»|ª£­;Á ¹ ²Gª«·²«¾*¦½®S¶[³ˆ°¦½® ¹ ·½¬Î[¬»3¬f§è¾Œ­Š²£» µL²£³A®ˆ¶É´ ª£­;ÁÉ·½¬§Š§Ä§Q©L¬¬f°.¥[´ê·¸¦½ò£¬å§Q¯;­Š¬ª£»|§ÒÑΜ²£·¨¶A§Q»|¦¸¯;¥ö§;©ˆ¬f°¦¸¾ ´ ¦¨°ª£·¸·½Á̧;²£³ ¹ ¥G¯¯;²Â¦½®S¶[³S°¬6§Q³[ë|ÎÌ©ˆª«­.ª£¶[¦ ¹ » °·½ªG§;§;¬§ Æy¬£Ò ¹ ÒÓÒÕÔxÖcÖQ×OV Ø×lOnmÀÑ[§Ò¼V×OV Ø×lOnmÀÑ[§Ò¼V×OV Ø2×OV?[H×lOnm Ñ[§ÒÏoFV Ø2×o…lrX3n È ¾Œ­Š²£» ­.ª~Ï ¯;¬ÎɯfÒÚÙ²ËÏI¬ÑG¬­fÕb¥Sª«®ˆ¶[·¸¦½® ¹ ²£¾ ¦¸­Š­;¬ ¹ ³A·¨ª«­2ÏZ²G­Š¶A§2ÏIªG§·¨ª«­ ¹ ¬·¸Á¬ÎA°·½³S¶[¬f¶"¾Œ­;²G»÷¯;¥A¦¨§ ÏI²£­ŠòWÕGª£§.Μ²£·¨¶A§;»3¦¸¯;¥ª£§Š§Q³A»|¬f¶3ª§Ö¯Š­;¦¨° 
¯Š·¸Á|°²G®S°ª«¯;¬®SªË´ ¯Š¦¸ÑG¬Z»|²£­Š©A¥A²G·¸² ¹ ÁÚÏ&¦º¯Š¥A²£³[¯»|²[¶[¬·¨§\¾Œ²£­§Ö¯Š¬»Ê°.¥Sª£® ¹ ¬f§Ò Ʋ£­Š©A¥A²G·¸² ¹ Á)¦¸®S¶A³S° ¯Š¦¸²G® ¦½® ª ¹G¹ ·½³[¯;¬®SªË¯Š¦¸ÑG¬ö·½ª£®[´ ¹ ³Sª ¹ ¬§"§Q³ˆ°.¥ªG§¤C³A­Šòɦ½§;¥Âª«®S¶ÐR¦¸®S®A¦½§;¥Â©A­Š¬§;¬®]¯Š§ª ©A­Š²£µS·¸¬»Ã§;¦½»3¦½·¨ª«­¯;²©Sª«­.§Q¦½® ¹ ²G­3§;¬ ¹ »|¬®]¯;¦½® ¹ ªÍ§Q¬®[´ ¯Š¬®S°¬£Õ ¹ ¦½Ñ£¬®|¯;¥A¬Ú·¸²G® ¹ §Q¯;­Š¦¸® ¹ § ²£¾CªËë|ÎAªË¯;¦½²£®ˆ§Ûª«·½·½²ËÏZ¬f¶ ª£®S¶å¯Š¥A¬±­Š¬·¨ªË¯;¦½Ñ£¬·¸Á¢¾Œ­Š¬¬±ª«ë3Îù²£­.¶[¬­fÒÜÛb²G³[¯;¦½·¨ª«¦½®A¬® Æ ôà£àË È ¥SªG§*ª£©A©A­Š²Gª£°.¥S¬¶k¯;¥A¦¨§/©A­Š²£µS·¸¬»ä¦½®åªÀúS®A¦¸¯;¬´ §Q¯Šª«¯;¬Å¾Œ­.ª«»|¬ÏI²£­ŠòWÕCª«®ˆ¶ºÙ2ª«òÉò˪«®S¦º´ï¤ÞÝ ³A­‡¬¯|ª«·pÒÆpá«æGæ£æ È ¥Sª~ÑG¬‡¶[²£®A¬‡§;²|³S§Q¦½® ¹ ª¯;­Š¦ ¹ ­.ª«» ¯.ª ¹£¹ ¬­fÕÉÏ&¦º¯Š¥¯Š¥A¬ ª£§Q´ §;³A»|©[¯;¦½²£®"²£¾ª|°²G®S°ª«¯;¬®SªË¯Š¦¸ÑG¬œª«ë3ÎAª«¯;¦½²£®»3²[¶[¬·vÒ ¤^¥A¬¯ÖÏZ²£´p·½¬ÑG¬·Þ»|²[¶[¬·‡²£¾»|²£­Š©A¥A²G·¸² ¹ ÁÐÆrÍÞ²]§QòG¬®[´ ®A¦½¬»|¦pÕ ôà Ý£Ü È ¥ˆª£§(µˆ¬¬® ¬Îɯ;­Š¬»|¬·½Á §Q³S°°¬f§;§Q¾Œ³A·¦¸® »Åª«®É³Sª£·¸·½Á¢°ª«©A¯;³A­Š¦¸® ¹ ¯;¥A¬Í»|²£­Š©A¥A²£·½² ¹ ¦½°ª«·&©A­Š²[°¬f§;§;¬§ ²£¾>¯;¥S¬ÏI²£­Š·¨¶xß §(·¨ª«® ¹ ³Sª ¹ ¬§Ò ¤^¥A¬Ð°²G®G¯Š¬Îɯ6§Q¬®S§Q¦¸´ ¯Š¦¸ÑG¬3§Q¯;¬»´ï°.¥Sª£® ¹ ¬»3²[¶[¬·½§œ³S§;¬¶>¦¸®>¯;¥A¦¨§‡°³S­;­Š¬®]¯Ú©ˆªË´ ©L¬­¥Sª~ÑG¬>µL¬¬®ù©Sª£­Q¯Š¦½ª£·¸·½Á6¦½®S§Q©S¦¸­Š¬¶ùµÉÁ¢¯;¥A¦¨§¾Œ­Šª£»|¬´ ÏI²£­ŠòWÒÐRS²£­¬ÎAª«»|©A·½¬£Õ&ªÀ¯ÖÏI²«´ê·¸¬Ñ£¬·¬Ô]³A¦½Ñ˪«·½¬®]¯°ª£©[´ ¯Š³A­;¦½® ¹ ’ i ààžáãâäV\˜Þåæ’ i ààžlrV\˜&¦¨§`áçlxèéàxç à ÕSÔ]³A¦¸¯;¬ §;¦¸»|¦½·½ª£­*¦½® §;©A¦¸­Š¦¸¯ª«®ˆ¶E¾Œ³A®S° ¯Š¦¸²G®6¯;²(²£³A­©A­Š²£µSª£µA¦¸·½¦¨§Ö´ ¯Š¦½°œ»|²[¶[¬·–êÅÆrëAǺì\í4îîî ï"ððcñ ^¹ò"ó È Òb¤^¥A¬­Š²£®ª«®S¶efZ·¸²É¬¯;¬ ®”/21"¥4-47 £ · ô HG;5 @”õ3<  ² £ ö%÷ ö.÷ø ö%÷hù ö%÷¤ú ö%÷û #.?> #.8/ ü /2 "/-4²?H¥ #%$ ¬r#=J± #.7 #.-4/1 #. > õ E« § 7 #%$ ¬r#=J± #%$ ®–§2HLM<¥4 7 DFE LM< DýE LM<?> DFE LM<7 DýE LM<2-0/21 DFE LM<?> ¬FþÁ° E 7;?>B-4/ /2/ E /²8 /2/ E /²8?> H//2 E /2²  7 H// E /2² -4/1 //2 E /² ?> ;G-4/2-0/21±  &  " 87 3-4/1 " / õ3<H/-47 £ · ô \G;= H@cõ< 8² £ ö.ÿ   ö ô 7 ö ô 7 ö ô 7 ö ô  < ö ô < ö ô < ü H/ "/2-4²?¥ #%HG #. #%7 #% #%HLM "7 # -47 #%/ õ E2« § 7 #.8G #. 7 #. #. LM 7 #  -47 #. 
/ #.-ªG #.-4LM "7 #  7 ¤ª«µA·½¬$á[óAß Î[ª£»|©A·¸¬Þ©Sª£­Q¯.§I²«¾b§Q©L¬¬f°.¥ª£®S¶¯;¥A¬¦¸­ªG§;§;²[°¦¨ªË¯;¬f¶°ª«®A²G®A¦¨°ª«·g§;³[ë|Î[¬§^¦½®"ßÛ® ¹ ·½¦¨§Q¥"ª«®S¶"×É©Sª£®A¦¨§Q¥ Æ ôfà£à é È §;²£³ ¹ ¥]¯$¯Š²Í·½¬ª£­;®EªÍá~´ê·½¬Ñ£¬·Z­Š³A·½¬§;¬¯3¾Œ²£­ÅßÛ®[´ ¹ ·½¦¨§Q¥8ÕÚ¥A²G§Šª±ª«®S¶ Š ¾Œ­;¦½ò˪£ª£®S§ÅµÉÁE§;³A©L¬­ŠÑɦ½§;¦¸²G®E¾Œ­;²G» 3Æ Ê æ£æGæ È ª«·½¦ ¹ ®A¬¶÷¦½®[¿S¬°¯;¦½²£®[´ê­Š²]²£¯6©Sª«¦½­Š§¢¬Îɯ;­.ª£° ¯Š¬¶ ¾Œ­Š²£»0¶[¦¨° ¯Š¦¸²G®Sª«­Š¦½¬§Ò*×ɦ¸® ¹ ·½¬|°.¥ˆª«­.ª£° ¯Š¬­Þ¦½®S§Q¬­Q¯Š¦¸²G®ª£®S¶ ¶[¬·¸¬¯;¦½²£®S§IÏI¬­Š¬2ª«·½·¸²ËÏI¬¶8Õ]ª£®S¶Å¯Š¥A¬Ú·½¬ª£­;®S¬¶Å­Š³A·½¬§I§Q³A©A´ ©L²£­;¯;¬¶|µL²«¯Š¥/©A­Š¬úAÎAª«¯;¦½²£®Åª«®S¶Å§;³[ë|ÎAªË¯Š¦¸²G®8Ò¤^¥S¬¦½­Z§;³[´ ©L¬­ŠÑ]¦¨§;¬¶E·¸¬fª«­Š®A¦½® ¹ ª«©A©A­Š²GªG°.¥E°²£³A·¨¶Eµˆ¬Íª«©A©A·½¦½¬¶¢¶[¦¸´ ­Š¬° ¯Š·¸Á/¯;²|¯;¥S¬ ª«·½¦ ¹ ®A¬¶©Sª«¦½­.§Z¦½®S¶[³ˆ°¬¶¦½®¯;¥A¦¨§&©Sª£©ˆ¬­Ò R\¦½®Sª£·¸·½Á£Õx2¿ˆª«ð¬­2ª£®S¶¦½­;¬®ÉµA³A­ ¹ Æ ôàGà£à È ¥Sª~Ñ£¬‡¶[¬´ Ñ£¬·¸²G©ˆ¬f¶kªè¾Œ­Šª£»3¬ÏZ²G­;ò¯Š²Í·½¬ª£­;®±¯ÖÏI²«´ê·½¬Ñ£¬·Û»|²£­Š©A¥A²£´ ·½² ¹ ¦¨°ª£·Úª£®Sª«·½ÁÉð¬­.§|¾Œ­Š²£» ¦¸®]¯Š¬­.ª£° ¯Š¦¸ÑG¬>§;³A©ˆ¬­;Ñɦ¨§Q¦½²£®6¦½® ª>ßÛ·¸¦¨°¦¸¯Q´;PI³A¦½·½¶É´ï¤\¬§Q¯3·½²É²£©Ä³A®ˆ¶[¬­3¯;¥S¬jPI²GªG§$©S­;²ŽÖ¬° ¯fÒ Ù³S»|ª£®S§Z©A­;²ËÑɦ¨¶[¬ª£§Q´ê®A¬¬f¶[¬¶3¾Œ¬¬¶[µSªG°.ò­Š¬ ¹ ª«­.¶[¦½® ¹ ¬­;´ ­Š²£­.§3ª£®S¶E²£»|¦¨§;§;¦¸²G®S§Ò Ŭ°¬®]¯;·½Á(ª«©S©A·¸¦½¬¶(¯Š² ‹²G·¸¦¨§;¥8Õ ¯;¥S¬Í»|²[¶[¬·‡ª£·½§;²(ªG§;§;³A»|¬§°²G®S°ª«¯;¬®ˆªË¯;¦½Ñ£¬>»3²G­;©S¥A²£·¸´ ² ¹ Áª«®S¶è¯;­Š¬ªË¯.§œ®A²£®[´ï°²G®S°ª«¯;¬®SªË¯Š¦¸ÑG¬ ¦¸­Š­Š¬ ¹ ³S·½ª£­¾Œ²£­Š»Å§ ¯;¥S­;²G³ ¹ ¥¯Šª£µA·¸¬Þ·½²É²£òɳA©8Ò ¤^¥]³ˆ§>¯Š¥A¬­Š¬E¦½§ª ®S²«¯Šª£µA·½¬ ¹ ª«©Ê¦½®Ê¯Š¥A¬(­Š¬§;¬ª£­Š°.¥ ·½¦º¯Š¬­.ªË¯;³S­;¬±¾Œ²G­Í¦½®S¶[³ˆ° ¯;¦½²£® ²«¾*ª£®Sª«·½ÁÉð¬­.§"¾Œ²G­Í¦½­;­Š¬ ¹ ³[´ ·¨ª«­‡»|²£­Š©A¥A²G·¸² ¹ ¦¨°ª«·©A­Š²[°¬§Š§;¬§Õ8¦¸®ˆ°·½³S¶[¦½® ¹ §Q¦ ¹ ®S¦ºúˆ°ª«®]¯ §Q¯;¬»O°.¥Sª£® ¹ ¦½® ¹ Ò̤^¥A¬Àª«· ¹ ²£­Š¦¸¯;¥A»ƒ¶[¬§Š°­Š¦½µˆ¬f¶6µˆ¬·¸²ËÏ ¶[¦½­;¬f° ¯Š·¸ÁèªG¶A¶[­Š¬§Š§Q¬f§¯Š¥A¦½§ ¹ ª«©8ÕgÏ&¥A¦½·½¬3§;³S°°¬§Š§Ö¾Œ³S·¸·½Á¦¸®A´ ¶[³S°¦¸® ¹ »|²£­Š¬2­Š¬ ¹ ³A·¨ª«­Zª£®Sª«·½Á[§Q¬f§ÛÏ&¦¸¯;¥S²£³[¯&§Q³S©ˆ¬­;Ñɦ¨§Q¦½²£® ª£§^ÏI¬·½·pÒ   Vû5   ÄQ9 ÛXûVCXZY ¡ H&?SVVCXÚPLH  9  9JÄÖ5?S9ïY£H ¤^¥A¬»3²£¯;¦½Ñ˪˯;¦½® ¹ 
The motivating dilemma behind our approach to morphological alignment is the question of how one determines that the past tense of sing is sang and not singed. The pairing sing→singed requires only simple concatenation with the canonical suffix +ed, and singed is indeed a legal word in our vocabulary (the past tense of singe). And while few irregular verbs have a true word occupying the slot that would be generated by a regular morphological rule, a large corpus is filled with many spelling mistakes or dysfluencies such as taked (observed with a frequency of 1), and such errors can wreak havoc in naive alignment-based methods.

How can we overcome this problem? Relative corpus frequency is one useful evidence source. Observe in Table 3 that in an 80 million word collection of newswire text the relative frequency distribution of sang/sing is 1427/1204 (or 1.19/1), which indicates a reasonably close frequency match, while the singed/sing ratio is 0.0071, a substantial disparity.

    Pairing        VBD/VB   log(VBD/VB)
    sang/sing      1.19       0.17
    singed/sing    0.0071    -4.9
    All VBD/VB     0.85      -0.16
Table 3: Example inflection-root frequency ratios [the original table also lists the singed/singe and sang/singe pairings, at log ratios of roughly 1.5 and 5.1]

However, simply looking for close relative frequencies between an inflection and its candidate root is inappropriate, given that some inflections are relatively rare and expected to occur much less frequently than the root form. Thus in order to be able to rank the sang/sing and singed/sing candidates effectively, it is necessary to be able to quantify how well each fits (or deviates from) expected frequency distributions. To do so, we use simple non-parametric statistics to calculate the probability of a particular VBD/VB ratio by examining how frequently other such ratios in a similar range have been seen in the corpus. Figure 1 illustrates such a histogram (based on the log of the ratios, to focus more attention on the extrema). The histogram is then smoothed and normalized as an approximation of the probability density function for this estimator (log(VBD/VB)), which we can then use to quantify to what extent a given candidate log(VBD/VB), such as log(sang/sing) = 0.17, fits our empirically motivated expectations. The relative position of the candidate pairings on the graph suggests that this estimator is indeed informative given the task of ranking potential root-inflection pairings.

Figure 1: Using the log(VBD/VB) estimator to rank potential VBD/VB pairs in English [labeled points include taked/take (-10.5), singed/sing (-4.9), took/take (-0.35), sang/sing (0.17), singed/singe (1.5) and sang/singe (5.1)]

However, estimating these distributions presents a problem, given that the true alignments (and hence frequency ratios) between inflections and roots are not assumed to be known in advance. Thus to approximate this distribution automatically, we make the simplifying assumption that the frequency ratios between inflections and roots (largely an issue of tense and usage) are not significantly different between regular and irregular morphological processes. Table 4 and Figure 2 illustrate that this simplifying assumption is supported empirically. Despite large lemma frequency differences between regular and irregular English verbs, their relative tense ratios for both VBD/VB and VBG/VB are quite similar in terms of their means and density functions.

    Verb type   VBD/VB  VBG/VB  Avg. lemma freq.
    Regular      .847    .746       861
    Irregular    .842    .761     17406
Table 4: Similar regular-irregular frequency ratios

Figure 2: Distributional similarity between regular and irregular forms for VBD/VB

Thus we initially approximate the VBD/VB ratios from an automatically extracted (and noisy) set of verb pairs exhibiting simple and uncontested suffixation with the canonical +ed suffix. This distribution is re-estimated as alignments improve, but a single function continues to predict frequency ratios of unaligned (largely irregular) word pairs from the observed frequency of previously aligned (and largely regular) ones.

Furthermore, we are not just limited to using the ratio VBD/VB to predict the expected frequency of a VBD form in the corpus. The expected frequency of a viable past-tense candidate for sing should also be estimable from the frequency of any of the other inflectional variants. Assuming that earlier iterations of the algorithm had filled the other lemma slots for VBG and VBZ in Table 5 with regular inflections, VBD/VBG and VBD/VBZ could also be used as estimators. Figure 3 shows the histogram for the estimator log(VBD/VBG); labeled points there include taked/taking (-9.5), singed/singing (-5.0), took/taking (0.6), sang/singing (1.0) and singed/singeing (2.2).

[Table 5 appears here, giving the corpus frequency of each inflectional slot (VB, VBD, VBG, VBN, VBZ) for the candidate roots sing and singe.]
Table 5: Example lemma frequency profile for sing
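The kind of non-parametric log-ratio scoring used for the frequency-similarity measure can be sketched as follows. This is an illustrative reconstruction rather than the authors' implementation: the helper names, the bin width, the smoothing window and the example ratios are all invented for the sketch.

```python
import math
from collections import Counter

def build_log_ratio_density(ratios, bin_width=0.5, smooth=1):
    """Histogram the log of observed inflection/root frequency ratios,
    then smooth and normalize it into an approximate density."""
    bins = Counter(round(math.log(r) / bin_width) for r in ratios)
    lo, hi = min(bins) - smooth, max(bins) + smooth
    # simple moving-average smoothing over adjacent bins
    smoothed = {b: sum(bins.get(b + d, 0) for d in range(-smooth, smooth + 1))
                   / (2 * smooth + 1)
                for b in range(lo, hi + 1)}
    total = sum(smoothed.values())
    return {b: v / total for b, v in smoothed.items()}, bin_width

def score(density, bin_width, infl_count, root_count):
    """Probability mass near the candidate's log frequency ratio."""
    b = round(math.log(infl_count / root_count) / bin_width)
    return density.get(b, 0.0)

# Ratios observed for confidently aligned regular verbs (invented numbers):
regular_ratios = [0.9, 1.1, 0.8, 1.3, 0.7, 1.0, 0.6, 1.2]
density, w = build_log_ratio_density(regular_ratios)

# sang/sing (1427/1204) falls near the mode; singed/sing (9/1204) far out:
assert score(density, w, 1427, 1204) > score(density, w, 9, 1204)
```

A plausible-but-wrong pairing such as singed/sing thus receives essentially no support from this evidence source, even though both words are legal.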
Figure 3: Using the log(VBD/VBG) estimator to rank potential VBD-VBG matches in English [labeled points include sang/singeing (7.3)]

We are also not limited to using only a single estimator. In fact, there are considerable robustness advantages to be gained by taking the average of estimators, especially for highly inflected languages where the observed frequency counts may be relatively small. To accomplish this in a general framework, we first estimate the hidden variable of total lemma frequency (LF) via a confidence-weighted average of the observed inflection frequencies and a globally estimated per-tag model. Then all subsequent inflection frequency estimations can be made relative to LF, or a somewhat advantageous variant, log(VBD/(LF-VBD)), with this distribution illustrated in Figure 4. Another advantage of this consensus approach is that it only requires N rather than N^2 estimators, which is especially important as the inflectional tagset T grows quite large in some languages.

[Footnote: Using this estimate, we predict a frequency for the true past-tense form which is an overestimate relative to the observed count. In contrast, the distribution for VBD/VBZ is considerably more noisy, given the problems with VBZ forms being confused with plural nouns; this latter measure typically yields underestimates.]

Figure 4: Using the log(VBD/(LF-VBD)) estimator to rank potential VBD-lemma matches in English

Also, one can alternately conduct the same frequency-distribution-based ranking experiments over suffixes rather than tags. For example, log(+ed/+ing) yields a similar estimator to log(VBD/VBG), but with somewhat higher variance.

Finally, these frequency-based alignment models can be informative even for more highly inflected languages. Figure 5 illustrates an estimate of the empirical distribution of the VPI3P/VINF part-of-speech frequency ratios in Spanish, with this estimator strongly favoring the correct but irregular juegan/jugar alignment rather than its orthographically similar competitors.

Figure 5: Using the log(VPI3P/VINF) estimator to rank potential VPI3P-VINF pairs in Spanish [labeled points include juegan/jugar (-0.4), juegan/juzgar (2.3), juegan/juntar (3.9) and juegan/jogar (4.8)]

[Footnote: This measure relies on estimated part-of-speech frequency distributions. However, when optional variant suffixes exist in the canonical suffix set, performance can be improved by modeling this distribution separately for verbs with and without an observed distinct variant form, as the relative distributions change somewhat substantially in these cases. One does not know in advance, however, whether a given test verb belongs to either set, so the initial frequency similarity score should be based on the average of both estimators until the presence or absence of the distinct variant form in the lemma can be ascertained on subsequent iterations.]

4 Alignment by Context Similarity

A second powerful measure for ranking the potential alignments between morphologically related forms is based on the contextual similarity of the candidate forms. For this measure, we computed traditional cosine similarity between vectors of weighted and filtered context features. While this measure also gives relatively high similarity to semantically similar words such as sip and drink, it is rare even for synonyms to exhibit more similar and idiosyncratic argument distributions and selectional preferences than inflectional variants of the same word (e.g. sipped, sipping and sip).

A primary goal in clustering inflectional variants of verbs is to give predominant vector weight to the head-noun objects and subjects of these verbs. However, to minimize needed training resources, we very roughly identified these positions by a set of simple regular expressions over small closed-class parts of speech, with remaining (open-class) content words labeled collectively as NN. Such expressions will clearly both extract significant noise and fail to match many legitimate contexts, but because they are applied to a large monolingual corpus, the partial coverage and signal-to-noise ratio are tolerable. Ideally, one would also automatically identify which set of patterns is appropriate for a given language, but this can be accomplished in subsequent iterations of the algorithm by taking previously extracted <inflection, root> pairs and testing which subset of predefined regular expressions is most effective in maximizing the mean context-similarity of the <inflection, root> pairs relative to non-pairs. Similar techniques can be used to weight the relative importance of contextual positions.

For similar reasons, it is useful in subsequent iterations of the algorithm to apply the current analysis modules towards lemmatizing the contextual feature sets. This has the effect of both condensing the contextual signal, and removing potentially distracting correlations with inflectional forms in context.

[Footnote: Another important concept in context similarity measures for morphology that differs from other word clustering measures is the need to downweight or eliminate context words, such as subject pronouns, that strongly correlate with only one or a few inflectional forms. Giving such words too much weight can cause different verbs of the same person/number to appear more similar to each other than to the different inflections of the same verb. Filtering based on high cross-lemma distributional entropy for given context words can help eliminate these counter-productive features.]

5 Alignment by Weighted Levenshtein Distance

The third alignment similarity function considers overall stem edit distance using a weighted Levenshtein measure. In morphological systems world-wide, vowels and vowel clusters are relatively mutable through morphological processes, while consonants generally tend to have a lower probability of change during inflection. Rather than treating all string edits as equal, a cost matrix of the form shown in Table 6 is utilized, with initial distance costs for vowel-vowel, vowel-cluster, consonant-consonant and consonant-vowel changes initially set to (0.5, 0.6, 1.0, 0.98), a relatively arbitrary assignment reflecting this tendency. However, as subsequent algorithm iterations proceed, this matrix is re-estimated with empirically observed character-to-character stem-change probabilities from the algorithm's current best weighted alignments.

[Table 6 appears here: the cost matrix over individual characters (a, e, o, u, m, n, ...), with cost 0 on the diagonal and the class-based substitution costs above elsewhere.]
Table 6: Initial Levenshtein cost matrix

More optimally, the initial state of this matrix could be seeded with values partially borrowed from previously trained matrices from other related languages. Alternately, the initial distances could be set partially sensitive to phonological similarities, with dist(d,t) < dist(d,f) for example, although this particular distinction emerges readily via iterative re-estimation from the baseline model.

6 Alignment by Morphological Transformation Probabilities

The goal of this research is not only to extract an accurate table of inflection-root alignments, but also to generalize this mapping function via a generative probabilistic model. The following section describes the creation of this model, as well as how the context-sensitive probability of each morphological transformation can be used as the fourth alignment similarity measure.

At each iteration of the algorithm, this probabilistic mapping function is trained on the table output of the previous iteration, equivalent to the information in Table 1 (e.g. <root, inflection> pairs with optional part-of-speech tags, confidence scores and stemchange+suffix analysis). From this output, we cluster the observed stem changes by the variable-length root context in which they were applied, as illustrated in Table 7.
[Footnote: If only the <root, inflection> pairs are given, with no stemchange/suffix analysis, this analysis can be generated deterministically by removing the longest matching canonical suffix from the inflection and generating the minimal α→β transformation capturing the remaining stem difference.]

[Table 7 appears here, clustering the observed stem changes by suffix and variable-length root-final context, with a count and examples for each; e.g. +ed applied to roots in -ay/-oy/-ey with no stem change (play, spray, stray, annoy, enjoy, obey), +ed applied to roots in -fy with the change y→i (beautify), and +d with the change y→i (lay, pay).]
Table 7: Stem change data given root context

First note that because the triple of <root>+<stemchange>+<suffix> uniquely determines a resulting inflection, one can effectively model P(inflection|root, suffix, POS) by P(stemchange|root, suffix, POS); i.e. for any root = γα, suffix = s and inflection = γβs, P(γβs|γα, s, POS) = P(α→β|γα, s, POS). Using statistics such as shown in Table 7, it is thus possible to compute the generation (or alignment) probability for an inflection given root and suffix using the simple interpolated backoff model in (1), where each λi is a function of the relative sample size of the conditioning event and lastk(root) indicates the final k characters of the root.

  P(inflection | root, suffix, POS)
    = P(α→β | root, suffix, POS)
    = λ1 P(α→β | last3(root), suffix, POS)
      + (1-λ1)(λ2 P(α→β | last2(root), suffix, POS)
      + (1-λ2)(λ3 P(α→β | last1(root), suffix, POS)
      + (1-λ3)(λ4 P(α→β | suffix, POS)
      + (1-λ4) P(α→β))))                                    (1)

We only backoff to the extent necessary. Furthermore, note that for English (and most inflections in Spanish) the stem changes observed when adding suffixes are independent of part of speech (i.e. +s behaves the same on suffixation for both nouns and verbs), so these probabilities can often be further simplified by deleting the conditioning variable POS, as illustrated in (2).

  P(solidified | solidify, +ed, VBD)
    = P(y→i | solidify, +ed, VBD)
    ≈ P(y→i | solidify, +ed)
    = λ1 P(y→i | ify, +ed) + (1-λ1)(λ2 P(y→i | fy, +ed)
      + (1-λ2)(λ3 P(y→i | y, +ed) + (1-λ3)(λ4 P(y→i | +ed)
      + (1-λ4) P(y→i))))                                    (2)

We have further generalized these variable-length context models via a full hierarchically-smoothed trie architecture, allowing robust specialization to very long root contexts if sample sizes are sufficient.

6.1 Baseline Model for Morphological Transformation Probabilities

On the first iteration, no inflection/root pairs are available for estimating the above models. As prior knowledge is not available regarding α→β stem-change probabilities, an assumption is made that the cost of each is proportional to the previously described Levenshtein distance between α and β, with the cost of a change increasing geometrically as the distance from the end of the root increases.
§;³[ë/°¦½¬®]¯;·½Á|¬\SW¬°¯;¦½Ñ£¬2²G®Å¦º¯.§Û²ËÏ&®CÒc]¬Úª£©A©A·½¦¸¬f¶|¯;­.ª£¶[¦¸´ ¯Š¦¸²G®Sª«· °·½ªG§;§;¦ºúˆ¬­ °²G»$µA¦½®Sª«¯;¦½²£®è¯Š¬°.¥A®S¦½Ô]³A¬f§Þ¯;²"»|¬­ ¹ ¬ ¯Š¥A¬è¾Œ²G³A­*»|²[¶[¬·¨§ßI§Š°²G­;¬f§Õ&§Š°ª«·½¦½® ¹ ¬ªG°.¥¢¯;²(ªG°.¥A¦½¬Ñ£¬ °²£»|©SªË¯Š¦¸µS·¸¬|¶[ÁÉ®Sª«»|¦¨°3­Šª£® ¹ ¬GÒ ¤^¥A¬³RA­Š¬Ô]³A¬®ˆ°Á£Õ–ÉC¬Ñ]´ ¬®S§Q¥]¯Š¬¦½®çª£®S¶!fZ²£®]¯;¬Î]¯å§;¦¸»|¦½·½ª£­;¦¸¯ÖÁö»|²[¶[¬·¨§¢­Š¬¯Šª£¦¸® ¬fÔG³ˆª«·I­;¬·½ª«¯;¦½Ñ£¬ÅÏI¬¦ ¹ ¥]¯3ªG§‡¯;­.ª«¦½®A¦½® ¹ ©A­Š²[°¬¬¶A§ÕÏ&¥A¦¸·½¬ ¯Š¥A¬½Æ²G­;©S¥A²£·½² ¹ ¦½°ª«·C¤\­Šª£®S§Q¾Œ²£­Š»|ª«¯;¦½²£®ÀÆrƲG­;©S¥S¤C­.ª«®ˆ§ È §;¦¸»|¦½·½ª£­;¦¸¯ÖÁ»3²[¶[¬·\¦½®S°­;¬fª£§;¬§&¦½®è­Š¬·¨ªË¯;¦½Ñ£¬$ÏZ¬¦ ¹ ¥G¯œª£§¦¸¯ µL¬°²£»|¬§^µL¬¯;¯;¬­¯;­.ª«¦½®A¬¶gÒ ¤\ª£µA·½¬Iݜ¶[¬»|²G®S§Ö¯Š­Šª«¯;¬f§8¯Š¥A¬&°²£»µA¦½®A¬¶$»3¬fª£§;³A­Š¬§\¦¸® ªG° ¯;¦½²£®CÕ]§;¥A²ËÏ&¦½® ¹ ¯;¥S¬2­Š¬·¨ªË¯;¦½Ñ£¬2­Šª£®Aòɦ¸® ¹ §b²«¾C°ª«®S¶A¦½¶Aª«¯;¬ ­Š²É²«¯Š§Û¾Œ²£­I¯Š¥A¬Þ¦¸®[¿ˆ¬° ¯Š¦¸²G®S§¹oFX"XK«Õž[J’ X"XKÞª£®S¶B‚$ƒVFmin|µÉÁ ¯Š¥A¬‡¾Œ²£³S­2§Q¦½»|¦¸·¨ª«­Š¦¸¯ÖÁ*»|²[¶[¬·¨§ªË¾ ¯Š¬­&¯Š¥A¬‡úS­.§Q¯¦º¯Š¬­.ªË¯;¦½²£® Æy¦¸®ºfZ²£·½³A»|®S§‡á~´ Ê È ÒÞ¤^¥A¬²ËÑ£¬­.ª«·½·\°²£®S§;¬®ˆ§Q³S§Ú§Q¦½»|¦¸·¨ª«­;´ ¦¸¯ÖÁ»|¬ª£§;³A­Š¬$ªË¯œ¯;¥S¬¬®S¶²«¾Zñê¯;¬­.ªË¯Š¦¸²G® ô ¦¨§œ§Q¥A²ËÏ&®>¦¸® fZ²G·¸³A»|® ô Ò  ²£¯;¬^¯;¥ˆªË¯Û¬Ñ£¬®¯;¥S²£³ ¹ ¥3²G®A·½Á$²£®S¬I²£¾ˆ¯Š¥A¬&¾Œ²G³A­ ¬§Q¯;¦¸´ »ÅªË¯Š²£­.§|¦¸®ˆ¶[¬©L¬®S¶A¬®]¯;·½Ák­.ª«®AòG¬¶[J’ iK2VŪG§3¯;¥A¬è»|²]§Ö¯ ·½¦¸òG¬·½ÁÊ­Š²]²£¯Ä²«¾´[’XXËÕ"ªË¾ ¯Š¬­±²G®A·¸Á̯;¥A¬¢úˆ­Š§Q¯±¦º¯Š¬­.ªË´ ¯Š¦¸²G® ¯Š¥A¬°²£®S§;¬®ˆ§Q³S§¢°.¥A²G¦½°¬ ¦¨§¢°²£­Š­Š¬° ¯fÒ ¤^¥A¬ùúA´ ®Sª£·Z°²£·½³A»|®Ä²«¾¤ª«µA·½¬Ýè§;¥A²ËÏ§Þ¯Š¥A¬*­Š¬¯Š­Šª£¦¸®S¬¶ Ʋ£­;´ ©A¥ˆ¤C­.ª«®S§Ð§;¦½»3¦½·¨ª«­Š¦º¯ÖÁ¼»|¬fª£§;³A­;¬ ª«¾ ¯;¬­ °²£®ÉÑG¬­ ¹ ¬®ˆ°¬£Ò P^ª£§;¬¶Ä²£®±¯Š­Šª£¦¸®A¦½® ¹ ¬Ñɦ½¶A¬®S°¬/¾Œ­Š²£»Ã¯;¥S¬°²£®[úˆ¶A¬®]¯;·½Á ª£·¸¦ ¹ ®A¬f¶E©Sª«¦½­Š§eo;X"XK$#)o;iK2VÕM[J’XXU#)[’iK2Vª«®S¶ ƒnžØVH˜ o;XXU#5ƒn:Ø2V\˜?o;iK2V\¾Œ­;²G»Â©A­;¬Ñɦ¸²G³S§8¦¸¯;¬­Šª«¯;¦½²£®S§Õf¯;¥A¬Z©A­;²Gµ[´ ª£µA¦¸·½¦¸¯ÖÁ$²«¾hiK2VŠÇ X"XK¥Sª£§¦½®S°­Š¬ªG§Q¬f¶§;¦ ¹ ®A¦¸úˆ°ª£®]¯;·½Á£Õ˾Œ³A­;´ ¯Š¥A¬­Z¦¸®S°­;¬fª£§;¦¸® ¹ ¯;¥A¬2°²£®[úL¶[¬®S°¬¦½®|¯;¥A¬²ËÑG¬­.ª«·½·Aª«·½¦ ¹ ®[´ »|¬®]¯.§bªË¯ °²G®]ÑG¬­ ¹ 
¬®S°¬œÆŒ®A²£¯Û§Q¥A²ËÏ&® È Õ«µA³A¯ ®A²«¯ °.¥Sª£® ¹ ´ ¦½® ¹ ¯;¥S¬œ©S­;¬Ñ]¦½²£³ˆ§Q·½Á*°²G­;­Š¬°¯I­.ª«®Sò]¦½® ¹ ¦¸®¯Š¥A¬§;¬ °ªG§Q¬f§Ò ¤^¥A¬ÚúS®Sª«·gª£·¸¦ ¹ ®A»|¬®G¯^°²G®S§Ö¯Š­Šª£¦¸®]¯ ¯Š¥SªË¯^ÏI¬Ú©A³S­Š§;³A¬¶ Ï^ª£§IµSªG§Q¬f¶/²£®*¯;¥A¬Þ©A¦ ¹ ¬²£®A¥S²£·½¬2©A­Š¦¸®ˆ°¦½©A·¸¬GÒb¤^¥A¦½§^©S­;¦½®[´ °¦¸©A·½¬"§Q³ ¹G¹ ¬f§Ö¯.§ ¯Š¥SªË¯3¾Œ²£­Åª ¹ ¦¸ÑG¬®Ä©ˆª«­;¯3²£¾Ú§Q©L¬¬f°.¥8ÕÛª ­Š²É²«¯§;¥A²G³A·½¶Ê®A²«¯À¥Sª~Ñ£¬k»3²G­;¬k¯;¥Sª£®Ê²£®S¬k¦½®[¿S¬°¯;¦½²£® ®A²G­Å§Q¥A²G³A·¨¶Ä»$³S·º¯Š¦¸©A·½¬"¦¸®A¿S¬°¯;¦½²£®S§3¦½®k¯Š¥A¬§;ª£»3¬©Sª£­Q¯ ²£¾§Q©L¬¬f°.¥Ì§;¥Sª«­Š¬Ä¯Š¥A¬E§Šª«»|¬(­;²É²«¯fÒO¤^¥A¬­Š¬(ª£­;¬GÕ$²«¾ °²£³A­.§Q¬GՈ¬ÎA°¬©A¯;¦½²£®S§¯;²¯;¥A¦¨§¯Š¬®S¶A¬®S°Á£Õg§Q³ˆ°.¥>ª£§½o…˜8i VHkOkV ØÂo…˜JiVHkV Ø ª£®S¶ ؘJV iY³V ØÂؘJV iYBopÕSÏ&¥A¦¨°.¥ª«­Š¬œ²Gµ[´ §;¬­ŠÑ£¬f¶Äª£§Ñ˪«­Š¦½ª£®G¯¾Œ²£­Š»Å§$²£¾¯;¥A¬¦¸­|­Š¬§;©ˆ¬f° ¯Š¬¶±­Š²É²«¯Š§Ò  /Œ">>-ª&-0 / & e £ ³² /7; /27 E 7Þ7;-4LM-0¥0\G&-4…Aº7;² HG&‡-4/ 7 E ¦² ¥ E LM/  ©–7 E ¦² ¥ E LM/  7 £ $oQ7¯ £ M?¨8G1½ H@h £  G/27) @ £ –²?H/>2-0>2H&–G&  x1-0¨" /› £ –-4/¿¾8²8&-4 "/Þ¾þ¸– £  G/27Q H@x £ ›²?H/>2-0>2H&›-4/¿¾ ²8&-4 "/Z1-4¨ /B £ %G&  ?­ ¢¤£ -47 ¦-0>2-ªG& ²J&-0 /H¥)?¨JG1"5G/2-4/21¯7;² HG&.@O?¨ HG&7¤²?H7; 7>o £ 8G& H;;GH²8&-4 "/³¦81o¤  /ZG&  H%H/>B-4/Z¾ ²8&-4 /Z-47=L E  E H¥O©H/> >2-47F@O?¨ HG&7x²?7;87o £ 8G& £ -01 £ 8G:G/"?>+²8 "LM<8&-ª&-4 "/+8§-47F&7 @ HGZjG&  H 7½H;&8/3&-4 /7 ©53»  ²8&-4¨8¥4A´²?H<2 E G&-4/21Õ o¤?H @ HG&L! 
@% £ Z<-41"8 "/ £ "¥4³<2G&-4/2² -4<¥4"­ ¢¤£ E 7M-ªDohH7 E 7;?> 7M £ ³<G&-4L HG;AÕG/23-0/21e²8G&-ª&8G&-0 ¬O \¨8G GUo 7;-4LM-0¥0\G&-4…A 7;² G&?±J­ ü H/>2-0>2H& ÿ  &7Q@ GQ £ ›®”/21"¥4-47 £ -4/¿¾8²8&-4 "/  µ¬  7F5-4&JGH&-4 "/±J· š.¨8G¥4¥:õ-4LM-4¥0HG&-ªýA ü "/&8§3 S G&$T E  /2²8A P  ¨" /7 £ & -4/ ÎB G&< £¢ G/27 ÎB HG&< £¢ G/7 ¬ &8G\&-0 /  ± õ-4LM-4¥0HG&-ªýA õ-4LM-4¥0HG&-ªýA õ-4LM-4¥0HG&-ªýA õ3-4LM-0¥0\G&-4…Aj¬  ± õ-4LM-4¥0HG&-ªýAu¬ ü ±  ­ VKVZ$s  ­ Y   ­ YKX  ­ VOt &   ­  &   ­ VVOKrK  ­ XsOrrKtKY  E G&/ ­ VKVVYH Y ­ t   E G&/ ­ rWXOs & ¥4¥ ­ VOKY &  "¥ ­  &  "¥ ­ VVOKrK &  H ­ VKVZ0s &8¥0¥ ­ VKVVsK $r ­   & $o8GÑ­ O  E G&/ ­ VZ$s &  ­ Z$V & "/21 ­ VVVKVKs &  ¥ ­ VKVZ0s &87F ­ VKVVKXZ  ­ s X & E ² £ ­ OWX ¥4 ­ VZUX  ­ V & "/2 ­ VVVKVKs & /1 ­ VKVVVWXOY H¥4 ­ VKVVOrh h ­ V r &-4< ­ KsZ & 7F ­ VVZ & "< ­ Ks ­4­0­ ­4­0­ & / ­ VKVVVWXOY &-4 ­ VKVVKXKX Ws ­ t s &-4 ­ KsV &?² £ ­ VVZ & "-4¥ ­ Ks  ­ VVVKVVKs & E  ­ VKVVVWXOY ü H/>->2H& ÿ  &7h@ HGQ £ ®–/1"¥4-47 £ -4/¿¾ ²8&-4 "/   !,¬  7F5-4&JGH&-4 "/±J· š.¨8GH¥4¥žõ3-0LM-4¥0HG&-ª…A ü "/&8§3 S G&UT E  /²JA P  ¨ /27 £ & -4/ ÎB G&< £¢ G/27 ÎB G&< £¢ GH/7 ¬ &8G\&-4 "/  ± õ-4LM-4¥\G&-ªýA õ-4LM-4¥0HG&-ªýA õ-4LM-4¥\G&-ªýA õ3-4LM-0¥0\G&-4…Aj¬  ± õ3-4LM-0¥0\G&-4…A‡¬ ü ± "# $ ­ VVZ3X r ­ r  "#  ­ YOr0X 7 £ HG& ­ VOtK 7 £  ­ rWVV 7 £  H ­ VKVOr "#  ­ XOsOrKrtWY 7 £  H ­ VVZ$Ks  ­   7 £ ?¨ ­ OW 7 £ -4< ­ VsY 7 £   ­ K 7 £  ­ VKVOr 7 £   ­ VVZ$Ks 7 £ -4< ­ VVZUVKX $s ­   7 £ H< ­ HUV 7 £ -ª@ ­ VsO 7 £  ­ H$V 7 £ ² ­ VKVVVs 7 £  ­ VVZ$Ks 7 £ \;&8G,­ VVVKsZ $Y ­  X 7 £ HG& ­ WX 7 £ "< ­ VsV "# $ ­ V 7 £ HG; ­ VKVVVs 7 £ ²J ­ VVVKVKXY 7 £ < ­ VVVKX  ­ Y r 7 £ $o¤JG ­ $YWX "#  ­ VOrKY 7 £ "< ­ Ws 7 £ E  ­ VKVVVOr 7 £ G; ­ VVVKVKXY 7 £ E  ­ VVVKYZ KV ­ s s 7 £  H ­ $s 7 £ E  ­ VOr 7 £ E  ­ Ws ­4­4­ ­4­4­ 7 £ \¨ ­ VVVKVKXY 7 £ E / ­ VVVK KV ­ t t 7 £ ² ­ 0r0X 7 £   ­ VOrH 7 £ 0o ­ Ws "# $ ­ VKVVVKV 7 £ G& ­ VVVKVKXY ü />2-0>\& ÿ  H&7Q@ G5 £ õ3<H/-47 £ -4/¿¾ ²8&-4 "/&%('&)+*&,.-Ó¬  7F=-ª&8G\&-4 "/±J· š.¨8GH¥4¥žõ3-0LM-4¥0HG&-ª…A ü "/&8§3 S G&$T E 
8/²8A P  ¨ /27 £ & -4/ ÎB G&< £¢ GH/7 ¬ &8G\&-4 "/  ± õ-4LM-4¥0HG&-ªýA õ3-4LM-0¥0\G&-4…A õ-4LM-4¥0HG&-ªýA õ-4LM-4¥\G&-ªýA‡¬  ± /0(1 32 ­ VVO0X  /0(1 $2 ­ YKY /0 1 $2 ­ VKs /0(1 32 ­ rKV /0(1 32 ­ VVH0 DýE$4 1"HG ­ VVVKs  DFE /HG ­ KY DFE34 1\G ­ VH0r DýE$4 1"HG ­  D "1"HG ­ VVH0 DýE GHG ­ VVV X DFE GHG ­ Ws D 1HG ­ VKV DýE /HG ­ r DýE /HG ­ VVKVVKX D "1\G ­ VVVKV r DFE 7F&-ªC²?\G ­ K DFE /3\G ­ VKVKX DýE G\G ­ $Y DýE$4 1"HG ­ VVKVVKX ¤\ª£µA·¸¬‡ÝSóAßbÎAª«»|©A·½¬Þ©ˆ¬­Q¾Œ²G­;»Åª«®ˆ°¬Þ²«¾¦¸®S¶A¬©L¬®S¶[¬®]¯Úª«®S¶°²G»$µA¦½®A¬f¶§;¦¸»|¦½·½ª£­;¦¸¯ÖÁ/»|¬ªG§Q³S­;¬f§ ¤^¥A¬3¬Îɯ;¬®]¯œ¯Š²Ï&¥S¦½°.¥§Q³S°.¥Í²ËÑ£¬­;·¨ª«©S§Ú§Q¥S²£³A·¨¶èµL¬|©ˆ¬´ ®Sª£·¸¦½ð¬f¶¶A¬©L¬®S¶A§²£®¯;¥A¬‡©S­;²GµSª«µA¦½·½¦º¯ÖÁ/²£¾§;¬¬¦¸® ¹ Ñ˪£­;¦¸´ ª«®]¯I¦¸®[¿ˆ¬° ¯Š¦¸²G®S§ ¦½®/¯Š¥A¬2»|²G­;©A¥S²£·½² ¹ ÁGÕ«µA³[¯I¾Œ²£­^×É©Sª£®A¦½§;¥ ª«®ˆ¶ßÛ® ¹ ·½¦½§;¥¯;¥A¦¨§&¦¨§&­;¬·½ª«¯;¦½Ñ£¬·¸ÁÅ·½²ËÏ‡Ò ]ͬÀ¬Î[©A·½²£¦¸¯;¬f¶å¯Š¥A¬±©A¦ ¹ ¬²£®A¥A²G·¸¬©A­Š¦¸®ˆ°¦½©A·¸¬À¦½® ¯ÖÏI² Ï^ª~Áɧ$§Q¦½»$³A·¸¯Šª£®A¬²G³S§;·¸ÁGÒ>¤^¥A¬/úˆ­Š§Q¯¦¨§ª ¹ ­Š¬¬¶AÁª£· ¹ ²«´ ­Š¦º¯Š¥A»"Õ&¦¸®6Ï&¥A¦¨°.¥å°ª£®S¶[¦¨¶AªË¯Š¬§/ª£­;¬ª£·¸¦ ¹ ®A¬f¶¢¦¸®6²G­Š¶A¬­ ²«¾ ¶[¬f°­Š¬ªG§Q¦½® ¹ §;°²£­Š¬£ÕSª£®S¶Ï&¥A¬®"¯Š¥A¬ ¯Š¥A¬‡úS­.§Q¯Q´ï°.¥A²£¦¨°¬ ­Š²]²£¯^¾Œ²£­2ª ¹ ¦¸ÑG¬®¦¸®[¿ˆ¬° ¯Š¦¸²G®¥Sª£§&ª£·¸­Š¬ªG¶[Á/µL¬¬®"¯Šª«òG¬® µÉÁ誫®A²£¯;¥A¬­Þ¦¸®[¿ˆ¬° ¯Š¦¸²G®Í²£¾ ¯;¥S¬/§;ª£»3¬3©Sª£­Q¯Þ²«¾Z§;©L¬¬°.¥CÕ ¯;¥S¬$ª«· ¹ ²G­;¦¸¯;¥A»÷°²G®G¯Š¦¸®É³A¬f§&³A®]¯;¦½·ª3¾Œ­;¬¬$§Q·½²«¯Ú¦½§¾Œ²£³A®S¶8Ò ¤^¥A¬Í¬ÎA°¬©[¯;¦½²£®6¦¨§*Ï&¥S¬®6¯Š¥A¬>¥A¦ ¹ ¥S¬§Q¯­.ª«®Aòɦ½® ¹ ¾Œ­;¬¬ ¾Œ²£­Š»Ê¦¨§b§Q¬Ñ£¬­Šª£·]²G­Š¶A¬­.§C²£¾L»Åª ¹ ®A¦¸¯;³S¶[¬^·½²ËÏZ¬­C¯Š¥Sª«®3¯Š¥A¬ úS­.§Ö¯ °.¥A²G¦½°¬65¥A¬­Š¬3¯;¥A¬|úˆ­Š§Q¯Q´ï°.¥A²£¦¨°¬|ª«·½¦ ¹ ®A»|¬®]¯ ¦¨§‡ª£§Q´ §;³A»|¬¶¯;²|µL¬ °²£­Š­Š¬° ¯fÕɵA³[¯Úª3Ñ˪«­Š¦½ª£®]¯Z¾Œ²G­;»"Ò 7 8  Ȝ9Ö?S9;PW5¤Ä 8 75hÄ 25\YA9Q@ÛX fZ³A­Š­;¬®]¯¬»|©A¦½­;¦¨°ª£·8¬Ñ˪«·½³Sª«¯;¦½²£®"²«¾¯;¥A¦¨§ÏI²£­Šò/¾Œ²[°³S§Q¬f§ ²£®k¦º¯.§ªG°°³S­ŠªG°Á¦¸®kª£®Sª«·½ÁÉ𦽮 ¹ ¯Š¥A¬²£¾ ¯;¬®Ä¥A¦ ¹ ¥A·½ÁÀ¦¸­;´ ­Š¬ ¹ ³S·½ª£­/©Sª£§Q¯/¯Š¬®S§;¬è²«¾ ßÛ® ¹ ·½¦½§;¥6Ñ£¬­;µˆ§Ò fZ²£®ˆ§Q¦¨§Ö¯Š¬®]¯ 
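A minimal sketch of such a greedy pigeonhole pass, under simplifying assumptions (invented scores, a single shared part of speech, and an arbitrary orders-of-magnitude threshold for declaring a variant form):

```python
def greedy_align(scored_candidates, variant_factor=1000.0):
    """scored_candidates: {inflection: [(root, score), ...]} with each list
    sorted by decreasing score; all inflections share one part of speech.
    Returns {inflection: (root, is_variant)}."""
    taken = set()
    alignment = {}
    # process inflections in order of decreasing best-candidate score
    order = sorted(scored_candidates,
                   key=lambda i: -scored_candidates[i][0][1])
    for infl in order:
        ranked = scored_candidates[infl]
        best_root, best_score = ranked[0]
        chosen = None
        for root, score in ranked:
            if root not in taken:
                # if the best free root scores orders of magnitude below the
                # first choice, keep the first choice as a variant form
                if best_score / max(score, 1e-12) >= variant_factor:
                    chosen = (best_root, True)
                else:
                    chosen = (root, False)
                break
        if chosen is None:  # every candidate already taken
            chosen = (best_root, True)
        root, is_variant = chosen
        if not is_variant:
            taken.add(root)
        alignment[infl] = (root, is_variant)
    return alignment

cands = {
    "sang":   [("sing", 0.9), ("singe", 0.0001)],
    "singed": [("singe", 0.6), ("sing", 0.3)],
}
result = greedy_align(cands)
assert result["sang"] == ("sing", False)
assert result["singed"] == ("singe", False)
```

With two spelling variants competing for one root (e.g. travelled and traveled for travel), the second-ranked loser's free alternatives score negligibly, so it is retained as a variant of the already-taken root rather than forced into a bad slot.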
Ï&¦¸¯;¥±©A­Š¦¸²G­‡¬»|©A¦½­;¦¨°ª£·Û§Q¯;³S¶[¦½¬§$¦¸®À¯Š¥A¦¨§‡úS¬·½¶gÕ¬Ñ˪£·¸³Sª«´ ¯;¦½²£®Ï^ª£§I©ˆ¬­Q¾Œ²G­;»|¬f¶*²G®ª¯;¬f§Ö¯§;¬¯²«¾Ü£ÝGÝ£Ý ¦½®[¿S¬f° ¯Š¬¶ ÏI²£­.¶A§Õ¦½®S°·½³S¶[¦½® ¹ ô á£Ý¥A¦ ¹ ¥A·½ÁÀ¦¸­Š­;¬ ¹ ³A·¨ª«­$¦¸®[¿ˆ¬° ¯Š¦¸²G®S§Õ ô ÝÉé£éͰªG§Q¬f§|Ï&¥A¬­Š¬"¯;¥S¬©Sª£§Q¯/¯;¬®S§;¬ÏIªG§3¾Œ²£­Š»3¬f¶¢µÉÁ §;¦¸»|©A·½¬°²£®S°ªË¯;¬®SªË¯Š¦¸ÑG¬$§;³[ë|Î[ª«¯;¦½²£®8ÕWª«®ˆ¶ ô Ý£ÝGÜ|¦¸®A¿S¬°´ ¯;¦½²£®ˆ§&¬Î[¥A¦½µA¦¸¯;¦½® ¹ ª|®A²£®[´ï°²G®S°ª«¯;¬®SªË¯Š¦¸ÑG¬Ú§Q¯;¬»÷°.¥Sª«® ¹ ¬ §;³S°.¥ªG§ ¹ ¬»3¦½®Sª«¯;¦½²£®²£­&¬·½¦¨§Q¦½²£®8Ò ñï®(¬Îɬf°³[¯Š¦¸²G®8Õb¾Œ²G­3¬fª£°.¥k¯;¬§Q¯Å¦¸®A¿S¬°¯;¬¶k¾Œ²£­Š»"Õ ¯Š¥A¬ ª«®ˆª«·½Áɧ;¦¨§Iª£· ¹ ²£­Š¦º¯Š¥A» Ï^ª£§I¾Œ­Š¬¬œ¯;²Å°²G®S§;¦½¶[¬­ª«·½¦ ¹ ®A»|¬®]¯ ¯Š²|ª£®ÉÁ/ÏZ²G­Š¶*¦¸®¯;¥S¬ °²£­Š©A³S§IÏ&¥A¦¨°.¥¥Sª£¶µˆ¬¬®¦½¶A¬®]¯;¦¸´ úS¬f¶*ªG§Ûª ©L²«¯Š¬®]¯;¦¨ª«·L­Š²]²£¯ZÑG¬­ŠµÅµ]Á3¯Š¥A¬Ú©ˆª«­;¯Q´ê²«¾ ´ï§Q©L¬¬f°.¥ ¯.ª ¹£¹ ¦¸® ¹ ©A­;²[°¬§Š§±²£­(²É°°³A­Š­Š¬®S°¬6¦¸® ªÌ¶[¦½°¯;¦½²£®Sª£­;Á]´ ¶[¬­;¦½Ñ£¬f¶œ­;²É²«¯Š·¸¦¨§Q¯Õn:X3o\ŽÖ³S§Q¯8¯;¥S²G§;¬ ­;²É²£¯Š§g¦½®Þ¯;¥A¬Û¯;¬f§Ö¯\§;¬¯fÒ ñê¯I¦½§ ¯;¥É³S§Ûª »3²G­;¬°.¥ˆª«·½·¸¬® ¹ ¦½® ¹ ¬Ñ˪£·¸³Sª«¯;¦½²£®3¯;¥ˆª«®|¯;¬f§Ö¯;´ ¦½® ¹ §;¦¸»|©A·½¬ ª«·½¦ ¹ ®A»|¬®]¯2ª£°°³A­.ª£°Áŵˆ¬¯ÖÏZ¬¬®¯ÖÏI²/°·½¬ª«® ª£®S¶¬Îɯ;­.ª«®A¬²£³S§Q´ê¬®]¯;­ŠÁ]´v¾Œ­Š¬¬ÏI²£­.¶[·½¦½§Q¯Š§Ò ¤\ª£µA·½¬ à §;¥A²Ëϧ¯Š¥A¬2©L¬­;¾Œ²£­Š»Åª«®S°¬²«¾8§;¬Ñ£¬­Šª£·A²«¾g¯Š¥A¬ ¦½®ÉÑ£¬§Q¯;¦ ¹ ª«¯;¬f¶*§;¦¸»|¦½·½ª£­;¦¸¯ÖÁ/»|¬ªG§Q³A­Š¬§ÒhRA­Š¬Ô]³A¬®S°Á*§;¦½»3¦¸´ ·¨ª«­Š¦º¯ÖÁ"ÆrRb× È Õɬ®A¥Sª£®S°¬f¶gÉ8¬Ñ£¬®S§Q¥]¯Š¬¦½®èÆrÉ\× È Õɪ«®S¶ufZ²G®[´ ¯Š¬Îɯڧ;¦¸»|¦½·½ª£­;¦¸¯ÖÁÍÆFf^× È ª«·½²£®S¬ ª£°.¥A¦½¬ÑG¬‡²£®A·½Á ô æ]ã*ÕSÜ ô ã ª£®S¶á£Ý]ã÷²ËÑG¬­.ª«·½·ªG°°³A­ŠªG°Á"­Š¬§;©ˆ¬f° ¯;¦½Ñ£¬·¸ÁGÒBÙ2²ËÏZ¬Ñ£¬­Õ ¯Š¥A¬è¥ÉÁÉ©ˆ²£¯;¥A¬f§Q¦¨§3¯;¥Sª«¯/¯;¥S¬§;¬»3¬fª£§;³A­Š¬§|»|²[¶[¬·¦½®S¶[¬´ ©L¬®S¶A¬®]¯ª«®ˆ¶E°²£»|©A·½¬»|¬®]¯.ª«­ŠÁĬÑɦ¨¶[¬®S°¬è§Q²G³A­.°¬§3¦¨§ §;³A©A©L²£­;¯;¬f¶±µÉÁͯŠ¥A¬*­Š²£³ ¹ ¥A·½ÁªG¶A¶[¦¸¯;¦½Ñ£¬*°²£»µA¦¸®S¬¶ÄªG° ´ °³A­ŠªG°Á/²£¾bé ô Ò È]ã*Ò 9 ¤^¥A¬ùúS®Sª«·©L¬­;¾Œ²£­Š»|ª£®S°¬ ²«¾è¯Š¥A¬å¾Œ³A·½·è°²G®ÉÑ£¬­ ¹ ¬f¶ f^×^¹Rb×^¹É×^¹Æ>×»|²[¶[¬·LªË¯ à£à Ò á]ãЪG°°³S­ŠªG°Á ²G®¯Š¥A¬ ¾Œ³A·½·ˆ¯;¬§Q¯I§;¬¯fÕ[ª«®S¶ àGà Ò é£ã̪£°°³A­.ª£°Á$²£®/¦½®[¿S¬f° 
¯;¦½²£®ˆ§ ­Š¬´ Ô]³A¦½­;¦½® ¹ ª«®Sª£·¸Á[§;¦½§ZµL¬Á£²G®S¶*§;¦½»3©S·¸¬‡°²G®S°ª«¯;¬®SªË¯Š¦¸ÑG¬Ú§;³[¾ ´ úAÎAª«¯;¦½²£®8Õ8¦¨§ÞÔ]³A¦º¯Š¬|­;¬»|ª£­;ò˪£µA·¸¬ ¹ ¦½Ñ£¬®è¯Š¥SªË¯Þ¯;¥S¬|ª£· ¹ ²«´ ­Š¦º¯Š¥A»÷¥Sª£¶ª£µS§Q²G·¸³A¯;¬·½Á®A²èØ&¦½®[¿S¬°¯;¦½²£®8Õ ­Š²]²£¯Ùù¬ÎAª«»3´ ©A·½¬§œª£§&¯;­.ª«¦½®A¦¸® ¹ ¶AªË¯.ªAÕWª«®S¶"¥ˆª£¶"®A²©A­;¦½²£­2¦¸®ÉÑ£¬®]¯;²£­ŠÁ ²£¾Z§Ö¯Š¬»¼°.¥Sª£® ¹ ¬f§Úª~Ñ˪£¦¸·¨ª«µA·½¬£ÕLÏ&¦¸¯;¥²£®A·½Áª§;·¸¦ ¹ ¥]¯Þ§Ö¯.ªË´ ¯Š¦½§Q¯;¦¨°ª£·8µA¦¨ª£§I¦¸®¾yª~ÑG²£­^²«¾§Q¥A²G­Q¯Š¬­&§Q¯;¬»÷°.¥Sª«® ¹ ¬§IÏ&¦¸¯;¥ : /Ð@O²J?©³-4/æL /ADz 7; 7 £ Œ² "/27; /7 E 7GH/3-4/1 ² £ "-4² +-47=² HG;G& ²8_o £  /³ ² £ -0/>2 <8/>28/3QLM 3> ¥; 75C2G&7F ² £ "-4² M-47ÿohG& "/212©–H²8 E H¥0¥ªAuA2-4 ¥0>-0/21³½7;L H¥4¥c7FA/JG&1"-47F&-4² 7 E <8G=>>2-ª&-4¨3-ªýA­ fZ²G»$µA¦½®SªË¯Š¦¸²G® <û²£¾ Š ·½· Ù¦ ¹ ¥S·¸Á ×ɦ½»|©A·½¬ ²G®[´ ²£¾×ɦ½»|¦¸·¨ª«­Š¦º¯ÖÁ ñꯊ¬­;´ ]²£­.¶A§ ñï­;­Š¬ ¹ ³A·½ª£­ fZ²£®S°ªË¯fÒ fZ²£®ˆ°ªË¯fÒ Æ²[¶[¬·½§ ªË¯;¦½²£®ˆ§ ÆvܣݣÝGÝ È Æ ô á«Ý È Æ ô Ý]éGé È Æ ô Ý£ÝGÜ È Rbׯ>=–˜JV?0ƒVHn:WHáA@)lOY È ÆŒñꯊ¬­ ô È à Ò Ý ô ÝAÒ È ÝAÒ Ý ô æSÒ ô É×ÍÆF֔VV\n[’oFVHlOnB@:lOY È ÆŒñꯊ¬­ ô È Ü ô Ò Ü ôfà Ò È á«æAÒ æ Ü Ê Ò Ê f^ׯDCQX3no;V/2oE@:lOY È ÆŒñꯊ¬­ ô È á«ÝSÒ æ Ü]á[Ò Ý Ü£æAÒ æ á Ë Ò Ý f^×^¹Rb× ÆŒñꯊ¬­ ô È ÜGáAÒ Ë È Ê Ò Ý ÜGá[Ò æ Ü£æSÒâé f^×^¹Rb×^¹É× ÆŒñꯊ¬­ ô È é ô Ò È é3ÈAÒ Ë é ô Ò ô é ô Ò à f^×^¹Rb×^¹É×^¹Æ>× ÆŒñꯊ¬­ ô È à ÈSÒ Ë é Ê Ò æ à éÉÒ Ü à é[Ò Ê f^×^¹Rb×^¹É×^¹Æ>× ÆýfZ²G®]Ñ ¹ÉÈ F(F qõô G(H–qJI F(F q F F(F qLK ¤\ª£µA·½¬ à ó ‹¬­;¾Œ²£­Š»Åª«®S°¬2²«¾b°²G»$µA¦½®A¬f¶ª£·¸¦ ¹ ®A»|¬®G¯&»|²[¶[¬·¨§&²G® Ê °·¨ª£§Š§;¬§^²«¾©Sª£§Q¯Q´p¯;¬®S§;¬ÞßÛ® ¹ ·½¦¨§Q¥Ñ£¬­;µS§ §;»|ª£·¸·½¬­`É8¬Ñ£¬®S§Q¥]¯Š¬¦½®>¶[¦¨§Ö¯.ª«®S°¬£Õgª«®S¶>Ï&¦º¯Š¥è¯Š¥A¬»|¦½®A¦¸´ »Åª«·S§;¬ª£­Š°.¥A´ê§;¦¸»|©A·½¦¸¾ŒÁ]¦½® ¹ ª£§Š§Q³S»3©A¯;¦½²£®3¦¸®Åª«·½·[¯Š¥A¬&»|²É¶[´ ¬·¨§œ¯;¥ˆªË¯$°ª£®S¶[¦¨¶AªË¯Š¬Åª«·½¦ ¹ ®A»|¬®]¯Š§Þ»³S§Ö¯ µL¬ ¹ ¦¸®ÀÏ&¦¸¯;¥±ª ¯;¥S¬ §;ª£»3¬BGNMOMœ©S­;¬úAÎgÒ P Μ¦¸ÑG¬®Íª§Q¯Šª«­;¯;¦½® ¹ ©ˆ²G¦¸®]¯ÞÏ&¥A¬­Š¬3ª£·¸·b§;¦¸® ¹ ·½¬°.¥Sª£­ŠªG° ´ ¯;¬­!QÇSR÷°.¥Sª£® ¹ ¬f§Úª«¯Ú¯Š¥A¬|©ˆ²G¦¸®]¯‡²«¾Z§;³[ë|ÎAªË¯Š¦¸²G®Íª«­Š¬ ¬Ô]³Sª£·¸·½Á$·½¦½ò£¬·½Á£Õ£¯;¥A¬2©A­;²[°¬§Š§Q¬f§²«¾W¬·½¦½§;²£®Ærò«ÇUT 
È Õ ¹ ¬»|¦¸´ ®Sª«¯;¦½²£®åƌ¬GÒ ¹ ÒVTÇSWͦ½®±¯;¥A¬°²G®G¯Š¬Îɯ$²£¾+W È Õ ª£®S¶ºëSÇŒì §;¥A¦º¾ ¯Eƌ¦½®Ð¯;¥A¬(°²£®]¯Š¬Îɯ貫¾/ª6©A­Š¬°¬¶[¦½® ¹ °²£®S§;²£®Sª£®]¯Õ ®A²£¯3ÑG²ËÏZ¬· È ª£·¸·^¬»3¬­ ¹ ¬/µÉÁÀ¯;¥A¬¬®ˆ¶Ä²«¾2¯;¥A¬úS­.§Ö¯3¦º¯;´ ¬­.ªË¯Š¦¸²G®*Ï&¦¸¯;¥¥A¦ ¹ ¥©A­;²GµSª«µS¦¸·½¦º¯ÖÁŦ½®¯;¥A¬¦¸­ª£©A©A­Š²£©A­Š¦½ª«¯;¬ °²G®]¯;¬Îɯ.§ÕAª£®S¶*·½²ËÏ ©S­;²GµSª«µA¦½·½¦º¯ÖÁ/¬·½§;¬Ï&¥S¬­Š¬£Ò ¤\ª£µA·¸¬ ô 懧Q¥S²Ëϧ¥A²ËÏ(¬fª£°.¥|²«¾L¯;¥S¬»|²É¶A¬·¨§ ©L¬­;¾Œ²£­Š» ²£®ÍªÅ­Šª£®S¶[²G»3·½Á]´ê§;¬·½¬°¯;¬f¶Ü£æ]ãö²«¾b¯Š¥A¬$¥S¦ ¹ ¥A·¸Á"¦½­;­Š¬ ¹ ³[´ ·¨ª«­œ¾Œ²£­Š»Å§ÕgÏ&¦¸¯;¥±°²G­;­Š¬°¯;·½Á§Q¬·¸¬f° ¯;¬f¶>­Š²É²«¯.§Ú¦¨¶[¬®G¯Š¦ºúˆ¬¶ ¦½®/µˆ²G·½¶8Ò¤^¥S¬2­Š¬§;¦½¶[³ˆª«·A¬­;­Š²£­.§ ª«­Š¬&©A­Š¦¸»Åª£­;¦½·¸Á3²«¾W¯Š¥A­;¬¬ ¯ÖÁÉ©ˆ¬f§ó(¤^ÏZ²Ä¦½®[¿S¬f° ¯Š¦¸²G®S§Õ&X5V\n)oœª£®S¶Ÿi3oFVÕ2ÏI¬­Š¬è®A²£¯ ª«·½¦ ¹ ®Sª£µA·¸¬ÍÏ&¦º¯Š¥å¯Š¥A¬¦½­°²£­Š­Š¬° ¯­;²É²«¯.§¶A³A¬>¯Š²(¶[¦SW¬­;´ ¬®]¯úˆ­Š§Q¯2°.¥Sª«­.ª£°¯;¬­Ò ¤^¥A¬$·½ª£­ ¹ ¬§Q¯&°·½ªG§;§^²£¾b¬­Š­;²G­Š§^ª«­Š¬ ¶[³A¬ ¯;²Å¯Š¥A¬ ©A¦ ¹ ¬²£®A¥A²G·¸¬Þ©A­Š¦½®S°¦½©A·½¬$§Ö¯Š­;²G® ¹ ·½Á*¶[¦¨§Q¾yª~Ñ£²£­;´ ¦½® ¹ ¯ÖÏZ²|¦½®[¿S¬°¯;¦½²£®S§&¾Œ­;²G» §Q¥ˆª«­Š¦¸® ¹ ¯;¥S¬ §;ª£»3¬Þ­Š²É²«¯Ò N ý Y ¢ ›< E – £  ¢ ¦2¥0  G& 7 E ¥ª&7c-4/M<8G&7;< ²J&-0¨""©HÎB  /8A /> ü H¥0-¼»´¬ Or ±hH² £ -08¨?> Y ­ rZ \¨8GH¥0¥)²8² E GH²8A E 7† -4/1½Þ@ E ¥4¥ªAg7 E <8G&¨3-47;?>‡>2 ² -47;-4 "/u¥4-07F%¥0 HG&/JG›;G-4/?>g "/ rWV <H-4G& > <H7F†r&8/7;0À?G&  Þ¨8G&¦´<-ªG&7g¬O-4/ <¥0H-0/´&8§3 @ HG&LÞ±J­ ª5¥ª £ E 1 £  £ 8A> "/ M¦G&?> 0oQ/ £ -47M<8G;@ G† L /2² ¹¦A"o G>u…A2<© £  -ªG-4/² ¥ E >?>    *  <G& "1GHL ;G-4/2?>g@G& "L rKV <H-4G&7%/>gH<<¥4-4?>³& ½ E G›8¨"H¥ E H&-4 "/ 7;8H² £ -4 ¨?> $VKVZ H² ² E G²8Ag /g £ ¯<-ªG&7oQ-ª £ 7;-0LM<2¥4 ‰5À;¸½²8 "/² H& /H&-4 "/© YKXZ ² ² E G²JAã "/7F& Lµ² £ /21"-4/1 ¬O/ /¿†r² /²?\J±h<-ªG&7=/> rZ ² ² E G²JAZ "/Z £  £ -41 £ ¥ªAB-ªG† G& 1 E ¥0HGQ<-ªG&7 ©¿oQ-ª £ YZ \¨8G¥4¥²8² E GH²8A­'š= £ 8Gh\¨-4¥¼† ¦2¥457 E <8G&¨3-47;?>¹¥4?HG&/2-4/1%G& 7 E ¥4&7Q¬O"­ 1­ P -4/16[ ÿ E LM ¥ £ HG; />øÎB² ü ¥4 ¥4¥H/>±HG&= "/¥ªAM1-4¨ /M@ HGh< £ "/ ¥4 "1"-4²?H¥²o¤ G> G& <G& 7; /H&-4 "/27 ­ Q £ -0¥4`/2 +>-4G&8²8&¥ªAu²8 "LM<\G¦2¥0 oQ-4 £ E 
G&8§3†r¦H7;?>`>2H© £  -ªG¤<8G;@ G&L H/² 5-477;-41"/2-4C²?/&¥ªA o¤ HG&7;½ £ /ŠÎB  "/28AãH/> ü ¥4-¼»E 7    *  "/ã² LMLM "/ < £ "/2 "¥4 "1-0² ¥ž<-ªG&?>³>\©7 E 1"1 7F&-4/1M £ \    *  -47. 1"8/8GH¥0¥ªAB² "LM<8&-ª&-4¨.G&J@ 8G& /2² ›< "-4/Q@ HG= E GQG& 7 E ¥4&7 ­ ¡]\ ¢¤£ -47'ohH7¤<2G&8¨-4 E 7;¥ªA`/2 &?>M-4/` £ Q²?H7;. @c¸^;À;¾_a` ¸^;À&¾_ À;¸`/>´¸^&À;¾_¹°O©: Gbc^þ À;¸.`dbc^þ À;¸M/>Bbc6^þ°O© oQ-ª £  £  £ -41 £ 8GQ<2G& ¦H¦-4¥4-4…A½/H¥ªA27;-47hýA<2-0² ¥4¥ªA½ ²8² E <AH† -4/1` £ %G&  =7;¥0 H%H/>½ £ ›¥4 0o8G%<G& "¦¦-4¥4-ªýAÞ@ HG&Lä…A<-¼† ²?H¥0¥ªAZ@ G&²  >³& 7; 8³¥4-41"/2LM /. ¥47;3o £ 8G&"­ />2 ?> © £  <-41 "/ £ "¥4=<2G&-4/² -4<2¥0Q-47c £ QLM "7Fc<2G& "¦2¥4 L H&-4²5 @)¥4¥ £  ¤^¥A¬I­;¬»|ª£¦¸®S¦¸® ¹ ¬­Š­Š²£­.§g¯ÖÁÉ©A¦¨°ª«·½·½ÁÞª£­;¬I¶[³A¬I¯;²Þ§;©Sª«­.§Q¬ §Q¯Šª«¯;¦¨§Ö¯Š¦½°§2¾Œ²G­2¯Š¥A¬3·¸²ËÏI¬­2¾Œ­;¬fÔG³S¬®S°Á¦½­Š­;¬ ¹ ³A·¨ª«­2¾Œ²£­Š»|§Ò Æèª£©A©A¦½® ¹ §Ú§Q³ˆ°.¥>ª£§ [ kVDXcd}[?kiá$ª£­;¬$©Sª«­;¯;¦¨°³A·¨ª«­Š·½Á¶[¦¸¾ ´ úˆ°³A·¸¯2µˆ¬f°ª£³S§Q¬GÕSÏ&¦¸¯;¥èª/°²£­Š©A³S§I¾Œ­Š¬Ô]³A¬®S°Á²«¾²£®A·½Á Ê Õ ¯Š¥A¬­Š¬Ú¦¨§I¯;²É²3·¸¦¸¯Q¯Š·¸¬‡¶Aª«¯Šª$¯;²3¬§Q¯;¦½»ÅªË¯;¬Þª ¹ ²É²[¶*°²G®]¯;¬Îɯ ©A­Š²«úˆ·¸¬Å²G­ª«®À¬HSL¬f° ¯;¦½Ñ£¬·¸ÁÀ¶[¦¨§;°­;¦½»|¦¸®ˆªË¯;²G­;Á>¾Œ­;¬fÔ]³A¬®S°Á ©A­Š²«úˆ·¸¬GÒ2ß ®A·¨ª«­ ¹ ¦½® ¹ ¯;¥S¬$­.ª~ϰ²£­Š©A³S§Ú§Q¦½ð¬$§;¥A²G³A·½¶"¦½»3´ ©A­Š²ËÑ£¬œ©L¬­;¾Œ²£­Š»|ª£®S°¬Þ¦½®µL²«¯Š¥"²«¾¯;¥A¬f§Q¬ °ª£§;¬§Ò e žÍ@ÛXÚPxÄ 2DÉ9Q@ÛX ¤^¥A¦¨§\©Sª«©L¬­¥Sª£§\©A­Š¬§;¬®]¯;¬f¶‡ª«® ²G­;¦ ¹ ¦½®Sª£·Gª«· ¹ ²G­;¦¸¯;¥S»°ªË´ ©Sª£µA·½¬ ²«¾[¦½®S¶[³S°¦¸® ¹ ¯;¥A¬ÛªG°°³A­Šª«¯;¬b»|²£­Š©A¥A²G·¸² ¹ ¦¨°ª£·~ª£®Sª«·¸´ Á[§;¦½§$²«¾&¬ÑG¬®À¥A¦ ¹ ¥A·½Áͦ½­Š­;¬ ¹ ³A·¨ª«­‡ÑG¬­ŠµS§Õ§Q¯Šª«­;¯;¦½® ¹ Ï&¦¸¯;¥ ®A²¢©Sª£¦¸­Š¬¶ÌØ&¦¸®A¿S¬°¯;¦½²£®8Õ ­;²É²£¯Ùõ¬ÎAª£»3©S·¸¬f§¾Œ²£­¯;­.ª«¦½®[´ ¦½® ¹ ª£®S¶¢®A²(©A­;¦½²£­§Q¬¬¶[¦½® ¹ ²£¾ ·¸¬ ¹ ª£·2»|²£­Š©A¥A²G·¸² ¹ ¦¨°ª«· ¯Š­Šª£®S§Ö¾Œ²G­;»Åª«¯;¦½²£®S§Ò/ñ꯶[²É¬f§ §Q²µÉÁ>¯;­Š¬ªË¯Š¦¸® ¹ »3²G­;©S¥A²«´ ·½² ¹ ¦½°ª«·|ª«®Sª£·¸Á[§;¦½§Í©A­Š¬¶[²G»3¦½®Sª£®G¯Š·¸ÁЪG§Íª£® ª«·½¦ ¹ ®A»|¬®]¯ ¯.ª£§;òE¦½®ÂªÄ·¨ª«­ ¹ ¬Í°²£­Š©A³S§Õ©L¬­;¾Œ²£­Š»3¦½® ¹ ¯;¥A¬¬\SW¬°¯;¦½Ñ£¬ °²£·½·½ª£µˆ²G­Šª«¯;¦½²£®>²£¾Û¾Œ²G³A­‡²£­Š¦ ¹ 
¦¸®ˆª«·b§Q¦½»|¦¸·¨ª«­Š¦¸¯ÖÁè»|¬ª£§;³A­Š¬§ µSªG§Q¬f¶¢²£®6¬ÎÉ©L¬°¯;¬f¶¢¾Œ­;¬fÔ]³A¬®S°ÁE¶A¦½§Q¯;­Š¦¸µS³[¯;¦½²£®S§Õ°²G®[´ ¯Š¬ÎɯÕ\»3²G­;©S¥A²£·½² ¹ ¦½°ª«·½·¸Á]´êÏZ¬¦ ¹ ¥G¯Š¬¶ÕÉC¬Ñ£¬®S§;¥G¯Š¬¦½®§;¦½»3¦¸´ ·¨ª«­Š¦º¯ÖÁ‡ª£®S¶ ª«®¦º¯Š¬­.ªË¯Š¦¸ÑG¬·½ÁœµL²É²«¯.§Ö¯Š­Šª£©A©ˆ¬f¶‡»|²[¶[¬·É²«¾LªË¾ ´ úAÎAª«¯;¦½²£®ª«®S¶"§Q¯;¬»´ï°.¥Sª£® ¹ ¬Þ©A­Š²£µSª£µA¦½·¸¦¸¯;¦½¬§ÒÛ¤^¥A¦¨§°²G®[´ §Q¯;¦¸¯;³[¯Š¬§"ªÄ§;¦ ¹ ®A¦¸úˆ°ª£®G¯ª£°.¥A¦½¬ÑG¬»|¬®]¯*¦½®6¯;¥ˆªË¯©S­;¬Ñ]¦¸´ ²G³S§ª«©A©A­Š²GªG°.¥A¬§8¯;²œ»|²£­Š©A¥A²£·½² ¹ Á‡ª£°Ô]³A¦¨§;¦º¯Š¦¸²G® ¥Sª~Ñ£¬I¬¦¸´ ¯Š¥A¬­Þ¾Œ²[°³ˆ§Q¬f¶>²G®Í³A®ˆ§Q³A©L¬­ŠÑɦ½§;¬¶Í¦½®S¶[³S°¯;¦½²£®²«¾IÔ]³SªG§Q¦¸´ ­Š¬ ¹ ³A·½ª£­Z°²£®S°ªË¯;¬®SªË¯Š¦¸ÑG¬2ªËë|ÎAªË¯Š¦¸²G®8Õ£²G­Z¥ˆª«®S¶[·½¬¶/¦¸­Š­;¬ ¹ ´ ³A·¨ª«­^¾Œ²G­;»Å§&Ï&¦¸¯;¥¾Œ³A·½·½Á§;³A©L¬­ŠÑ]¦¨§;¬¶*¯;­.ª«¦½®A¦¸® ¹ Ò ñï®°²G®[´ ¯Š­ŠªG§Ö¯fÕb¯;¥A¦¨§|©Sª£©ˆ¬­ß §3¬§Š§Q¬®G¯Š¦½ª£·¸·½ÁÀ³A®S§;³A©L¬­ŠÑ]¦¨§;¬¶kª£· ¹ ²«´ ­Š¦º¯Š¥A» ªG°.¥A¦½¬Ñ£¬f§²ËÑ£¬­"Ý£æÉリ£°°³A­.ª£°Á6²£®ù¯;¥A¬±»|²]§Ö¯ ¥A¦ ¹ ¥S·¸ÁЦ½­;­Š¬ ¹ ³A·¨ª«­¾Œ²G­;»Å§Õ3ª£®S¶ à£à ÒâéGã ª£°°³A­.ª£°Á²£® ª£®Sª«·½Á[§Q¬f§\­Š¬Ô]³A¦½­Š¦¸® ¹ §Q²G»|¬^§Q¯;¬»Ì°.¥Sª£® ¹ ¬GÕ˲£³[¯Š©ˆ¬­Q¾Œ²G­;»3´ ¦½® ¹ Ʋɲ£®S¬Á*ª£®S¶ãfIª«·½¦0S›ß §&¾Œ³A·½·¸Á§Q³S©ˆ¬­;Ñɦ¨§Q¬f¶*·½¬ª£­;®A¦½® ¹ ª£· ¹ ²£­Š¦º¯Š¥A»ö²ËÑ£¬­Šª£·¸·Wª«®ˆ¶*²G®µL²«¯Š¥"²«¾C¯Š¥A¬§;¬‡»|¬ªG§Q³A­Š¬§Ò ¥4-41"/2LM /x<2G&-4/²8-0<2¥4 7 E 7;?> ©H7x-4–²JG&?H& 7–/2?HG&¥ªA+7–L H/3A <2G& ¦¥4 LM7%H7.-ª.C2§ 7 ­ ¢¤£ ¯ \¨JG¥4¥x<8G;@ HG&L /2² ¹>2¨/Z† 1e-47Z7;¥4-01 £ &¥4A¶-4/Œ-ª&7Z@O?¨ Ge¬õoQ-4 £ r LM-47&¥4-41"/LM8/3&7 ?¨ "-0>2 >½@ HG rWV <2G& ¦¥4 LM7h²8G& H&?>±J©¦ E ¤ £ ›²8 "7F5 @ž £ -47 <2<2G& H² £ -475¦ G&/2 £ ??¨3-0¥ªA½¦A½ £ ›-4G;G&81 E ¥0\G%¨8G&¦27 ©H/> ›<2G& "¦¦2-0¥4-47F&-4².LM 3> ¥ H@?o £  /M¨"\G&-0/@ G&LM77 £ E ¥>`¦ 8§< ²8&?>ËÀH¥0¥4 $o¤ >Z-07Q/2 ²  7;7&\G;AB& ¹C2§Þ £  7;%²?H7; 7Éo £ -4¥4 <2G& 7;JG&¨-4/21j £ ³>2¨/1 7M @› £ Z<2G&-4/2² -4<¥4Z-4/º>2 $oQ/Z† o¤ -41 £ &-4/21¯² ¥0H7 £ -4/1+/¥ªA7; 7¤-4/` £ QLM HG&QG&81 E ¥0\G¤¨8G&¦27 ­ ¢  7F ¢ G E  ü õ2# S õ# P õ#ÿγõ ü õ# S õ# P õ ü õ2# S õ P õÞ /¥ªA Q HG> ÿ  H ¬ ü /3¨1± õ² HG& ¬ ;G  ± ¬ ;G  ± ¬ ;G  ± ¬ ;G  ± 1" H 1"8 1  ­ KV 1 1" 1 1 
E  3/3o /2 $o gf(h6i  ­ r gf(h6i  f(h6i gf(h6i gf(h6i &  "    ­ rWV    &  H ¦¥43o ¦¥4 $o jlk h3i  ­ YKV jlk h3i jmk h6i jlk h3i jlk h3i ¦ ²?HLM ¦ ² "LM j 6nhpoq  ­ r j 6nhpoq j 6nhros j 6nhpoq j 6nhpoq L "> L  oq  ­ XV oq os oq L \& ² ¥ E /21 ² ¥4-0/21 n kLt f 1  ­ rKr n kLt f 1 n kLt f 1 n kLt f 1 n kLt f 1 >G&o >GUo u 2i  ­ sr u 2i u 2D6i u 2i u 2i 7o¤ HG& 7o¤?\G " iv$2  ­ YKV " iv$2 " iw$2 " iv$2 7F& HG& o¤ HG& o¤?\G iv$2  ­ UV iv$2 iw$2 iv$2 oQ-ªG& ²?HLM ² "LM nhpos  ­ rKr nhpos nhpos nhpos nhpos  £ E 1 £   £ -4/  # t fl  ­ sKV  # t fl  # t fl  # t fl  £ E LM< ¾ E /1 ¾-4/1 xmt f 1 X ­ sKV xmt f 1 xyt f 1 xmt f 1 xmt f 1 ¦2G& E 1 £  ¦2G&-4/1 j 2 t f 1 r ­ r j 2 t f 1 j 2 t f 1 j 2 t f 1 ¦G&-41 £ & / 7F;G& \¨ 7F;G&-0¨" " 2 t{z  r ­ Yr " 2 t{z  " 2 tJz  7F;G>>¥0 " 2 t{z  7F E ² 7F&-0² "  t n s ­ VKV "  t n "  t n 7FH¦-4¥44  7F& ² 7o¤8<2 7o¤  < " iv| s ­ WV " iv| " iw| " iv| 7o¤< 7 £ / 7 £ -4/ "# t f( s ­ rKr "# t f( "# t f( "# t f( "# t f( o¤  oh" iv s ­ r iv iw$ oQ-4/> iv ² ¥4 \¨ ² ¥4??¨ n k  z  t ­ r n k  z  n k  z  n k  z  ²8¥0 7; ¦ G& ¦?HG j $2 t ­ tKr j $2 ¦HG j $2 ¦HG& LM?H/3 LM?/ oq6$f Y ­ WV oq6$f osf L H/H1" LM E / ¥4 / ¥08/> k 3f u  ­ Kr k 3f u k 6f u k 3f u k 3f u 7;¥43o 7;¥ A 7;¥4-ª $V ­ VKs 7;¥4-ª 7;¥0-41 £  7;¥4-41 £  7;¥4 $o 7F;G E ² 7F;G&-0" " 2 t   ­ sKV " 2 t  " 2 t  " 2 t  7F;G E  ¦ E 1 £  ¦ E A j 0(} 0 ­ WV j 0(} j 0(} j 0(} ¦ E /> ¦-ª ¦-ª& jlt  $ ­ sKV jlt  jmt  ¦J;G?A ¦J >2 \¨ >2-4¨ u(tJz  0t ­ Kr u(tJz  ultJz  >27 £ u(tJz  ¦ E G&/ ¦ E G&/ ¦ E G&< 0t ­ KV ¦ E G&< ¦ E G&< ¦ E G&< j 0 2f o¤8/3 1" o¤/ $Y ­  o¤/ oh/ o¤/ o¤/ ²? 
E 1 £  ²?H&² £ n$n # $Y ­ r n$n # ² E  n$n # ²8 E 1 £ >2 ¥ª >2?H¥ u 6 k H ­ XOr u 6 k u  k >-47&1G&8 u 6 k ¤\ª£µA·½¬ ô æAó ‹¬­Q¾Œ²G­;»Åª«®ˆ°¬2²£¾ Ê ª£·¸¦ ¹ ®A»|¬®G¯&»|²[¶[¬·¨§&²G®Ü]á$­.ª«®ˆ¶[²£»|·½Á/§Q¬·¸¬f° ¯;¬f¶¥A¦ ¹ ¥A·½Á/¦¸­Š­Š¬ ¹ ³S·½ª£­^ßÛ® ¹ ·½¦¨§Q¥Ñ£¬­ŠµS§ N¢VÊ.V8?AVCXÚPWVCD Îu­ ÿ ­ ÷ G& /?©  ­ ÎB-4/-4L H¥.1"8/8G\&-0¨"‡LM 3> ¥47 ·ª LM-0>2>2¥4›1G& E />½¦8 o  /³/ E G& "/27.H/>½;G&-411"8G&7 ­~€^;Á ‚ À&À;¸ƒ þ„…Á‡†.°JˆÀv‰Š°Jˆ‹5þþ3c2¾Œ £ ÁHþ†?À^;ÀJþ ‚ À+Á>†.°JˆÀ £ ÁŽ„ þ3ƒ °;ƒLÀ+ ‚ ƒrÀJþ ‚ À+Á ‚ ƒrÀJ°;‘H©<1 7 WYD’hs ­ Îu­ ÿ ­ ÷ G&8/3?©  ­ ª=/ « ² -4 /?© <2G& ¦H¦-4¥4-07F&-4²?H¥4¥4A 7; E />M¥41" HG&-4 £ L @ G¤7;81"LM /H&-4 / />½o G>M>2-47;² \¨h† JG;A­”“u¾ ‚ ˆƒ þÀ–•žÀ;¾^þ$ƒþ„—™˜š2©2<1 7 tH’Ë$VKs ­ õ ­ ü E ² JG 4 /‡/> ø ­3›¤\G& 0oQ7;A© WVVV ­ P /21 E 1"+-4/>3† <8/>28/3¯LM-4/-4L ¥4¥ªAe7 E <8G&¨3-47;?>e-4/> E ²8&-4 "/ã @=¥48§-4²?¥ <G& "¦¦-4¥4-ª&-4 7 ­œ~€^;Á ‚ À&À;¸ƒ þ„…¹Á>†–‹ £ •ž ŸŸH©¿ˆ. /1w › "/1­ ü ­)>2½ÎZHG&² /)© Or ­ n /27 E <8G&¨3-47;?>³¥0/21 E H1"MH²$T E -¼† 7;-ª&-4 "/­ ô £2ø >2-47;7;8G;\&-0 /)­>Î ¢ ­ ø ­”®–1"?>-5/> ÿ ­cõ3<2G& "H?© YKY ­ ü //2 ²8&-4 "/2-07F û 8† o G&37½/> û H E G¥ P H/1 E 1 ÎB HG&< £ "¥4 "1HA­ n Î ø ü /2@– / ú GLML \G5/> P H/1 E 1 ô G& ² 87;7;-0/212­ ¡ ­ ú ¥>7;LM-ª £ © KVKVV ­ n /7 E <8G&¨3-07; > P ?\G&/¿† -4/21 H@u £ úÎB HG&< £ "¥4 "1HAä @u û H E G¥ P /1 E H1""­ ˆ°O°£¢g¤ ¥¥6ˆc6_`¾þ$ƒ°;ƒrÀ…¦§c ‚ ˆƒ ‚ ¾„Á¦ À;¸c¥†8¾ ‚ c6Œ °;‘¥„ÁŒ4¸…Ž_ ƒ °Jˆ¥ •mƒ þ„cƒL…&°;ƒ ‚ ¾¨ŸŸŸ¥~Q¾¢2À^©¥¢¾¢2À^¦£ˆ3°;_ Œ ­ ø ­ ù ­²ˆ%H/2-¼† ¢žª E G?©g `­“šŽ¾ 4 8G?© ú ­ ¢žª E G?© KVKVV ­¯õH&-47† &-4²?H¥ÎB G&< £ "¥4 1"-4²?¥ ø -47&L¯¦-41 E H&-4 "/g@ HG{ª=11"¥ E &-4/W† &-4¨ P /21 E 1"87 ­ / ~€^;Á ‚ À&À&¸ƒ þ„…¯Á‡† £œ« •(¬®­!¯S¨6ŸŸŸH­ P ­ \G;; E /2 /)© K ­ S -4/-ª&=7F\&.² /7F;GH-0/&7 ­ / ¡ £ / ú ¥0>27;LM-ª £ ¬O?>­ ±s° ˆÀE•:¾…°l~™ˆ2ÁþÁŒ4ÁŽ„ƒ ‚ ¾Œ$±Ec6Œ0ÀJ©3<1 7 $tK’<WX ­ ü £ -4²?1 2· n /2-0¨"8G&7;-ªýAÞ @ ü £ -0² 1" ô G& 7;7 ­ ø ­™  4 H \¨ © Ot ­ n /7 E <JG&¨-47;?>ã¥4?HG&/2-4/1Õ @%/-4¨ LM HG&< £ ¥4 "1A oQ-ª £ 1" /28&-4²=H¥41" G&-ª £ LM7 ­™² £ “N•p¥6“qŒ þÀJ° ³ 
Á^;¿…®ˆÁ¢ÕÁþN²€_–¢ƒJ^ƒ ‚ ¾Œ$•žÀ;¾^þ$ƒ þ„½Á‡†–­E•(~´°¾…;¿…&­  `­6 › "7; /2/-4 LM-O© Y ­'ª¶1 /8GH¥)²8 "LM< E H&-4 "/ÞLM 3>28¥ @ Go¤ HG>Z† @ HG&L G& ² "1/-ª&-4 "/ H/>¯<G& 3> E ²8&-4 "/)­m~œc3bD¦p‰‰— µ À©¢°©¦”Á>†¯QÀJþ À^;¾Œp•yƒ þ„cƒL…&°;ƒ ‚ …&­ n /-4¨­ @ˆ.8¥07;-4/2-O­ ü ­ « ­ P -4/1© WX ­ P ?\G&/-4/1  £ ¯<7F.& /7;¯ @h®–/1¥4-07 £ ¨JG&¦7 · ¢¤£ ›7FAL¯¦ "¥4-4²›<H;&JG&/Z7;7; ²8-\& G›¨37 ­2² "/2/ ²† &-4 "/2-47F.LM 3> ¥47 ­E¶6¦ ‹·^°©¦ ¬þ°…ÀŒ§¦g±5À…¦0©  · WV † K ­ ÿ ­ÎB  "/JA/>zÎu­ ü H¥4-é»x© r ­ /> E ²8&-4 "/Õ H@=C2G&7F† HG>28G½>28² -47;-0 /º¥4-07F&7 · ÿ 87 E ¥ª&7 "/ ¥4?\G&/-4/1j £ ³<H7F & /27;› @–®–/1¥4-07 £ ¨"8G&¦7 ­E¶6¦g‹·^°©¦(¬þ2°…ÀŒ§¦ ±QÀ…¦0©  ·  † WX ­  `­šŽ¾ 4 8G`/>ãõ ­ û -ªG& /3¦ E G&1©  ­ ô GH²8&-4²?¥h¦  H† 7F;GH<<-4/21³ @5LM G&< £ "¥4 1"-4²?¥¤/H¥ªA 4 8G&7 ­q~€^;Á ‚ À&À&¸ƒ þ„… Á‡†%°JˆÀ £ Áþ†?À^;ÀJþ ‚ À¹ÁþN­›¾°;c^;¾Œp•:¾þ„c¾„À¸•žÀ;¾^þ$ƒ þ„­ ø ­ ÿ E LM ¥ £ \G;MH/> ¡ ­>ÎB² ü ¥08¥0¥0H/>© YKs ­8š./¥4?HG&/Z† -4/1Þ £ `<7F›& /27;` @h®”/21"¥4-47 £ ¨8G&¦27 ­ / ¡ ­MÎB² ü ¥4 ¥¼† ¥0/>© ø ­ ÿ E LM ¥ £ \G;?©H/>¹ £  ô ø ô ÿ  7; HG&² £Þú G& E <© ~Q¾^;¾ŒLŒ0ÀŒ–¸ƒL…&°;^¹ƒºbc3°…À&¸¢^FÁ ‚ À…Ž…Žƒ þ„¤w²¼»¢rŒ4Á^F¾°;ƒ Áþ3… ƒ þu°JˆÀ _wƒ ‚ ^;Á…&°;^¹c ‚ °;c6^;ÀMÁ‡† ‚ ÁŽ„þ3ƒ °;ƒ Áþ2© ö "¥ E LM  ­-Î ¢ ô G& 7;7 ­ ô ­ ¢¤£ 8G& /Þ/> ­ ü ¥4 8&"© Ot ­-ª E & L H&-4²Q²$T E -07;-ª&-4 "/ H@: o K†r¥4 ¨ ¥LM HG&< £ "¥4 "1-0² ¥ G E ¥4 7 ­™~€^;Á ‚ À&À&¸ƒ þ„…+Á‡†5°JˆÀ ½ ƒ †°Jˆ £ Áþ†?À^&ÀJþ ‚ À.Áþ!‹€¢¢rŒ ƒrÀ;¸­¾°;c^F¾Œ•:¾þ„c2¾„À€~€^;Á ‚ À…Ž…Žƒ þ„© Q 7 £ -4/1& /)©<1 7 $VK † UV ­ ª+­ ö E &-4¥0-4/8/)© r ­_ÎB HG&< £ ¥4 "1"-4²?H¥–>2-47&L¯¦-41 E H&-4 "/­ / S ­$ \G&¥47;7; "/)©²ª+­ ö E &-4¥0-4/ /© ¡ ­Ëˆ. -433-4¥3©H/> ª­ ª5/3;&-4¥0 ¬O?>7 ­ ± £ Áþ3…°;^F¾ƒ þ2°w„^;¾_w_M¾^w‹¾Œ4¾þ„c2¾„À ƒ þ¸À©¢2ÀJþ¸ÀJþ° …Ž‘…&°…À_¿† Á^¢¾^¹…¹ƒþ„Uc3þ$^;À…&°;^ƒ ‚ °…À;¸ °…À®»°O© <1"87 UsOr’HKYWX ­ ¢¤£ ÿˆ%1 E "·-ÎB E & "/Z>2 ú G E A&JG?­
A Constraint-based Approach to English Prosodic Constituents

Ewan Klein
Division of Informatics
University of Edinburgh
2 Buccleuch Place, Edinburgh EH8 9LW, UK
[email protected]

Abstract

The paper develops a constraint-based theory of prosodic phrasing and prominence, based on an HPSG framework, with an implementation in ALE. Prominence and juncture are represented by n-ary branching metrical trees. The general aim is to define prosodic structures recursively, in parallel with the definition of syntactic structures. We address a number of prima facie problems arising from the discrepancy between syntactic and prosodic structure.

1 Introduction

This paper develops a declarative treatment of prosodic constituents within the framework of constraint-based phonology, as developed for example in (Bird, 1995; Mastroianni and Carpenter, 1994). On such an approach, phonological representations are encoded with typed feature terms. In addition to the representational power of complex feature values, the inheritance hierarchy of types provides a flexible mechanism for classifying linguistic structures, and for expressing generalizations by means of type inference. To date, little work within constraint-based phonology has addressed prosodic structure above the level of the foot. In my treatment, I will adopt the following assumptions:

1. Phonology is induced in parallel with syntactic structure, rather than being mapped from prebuilt parse trees.
2. Individual lexical items do not impose constraints on their neighbour's phonology.

The first of these assumptions ensures that phonology is compositional, in the sense that the only information available when assembling the phonology of a complex constituent is the phonology of that constituent's daughters.
The second assumption is one that is standardly adopted in HPSG (Pollard and Sag, 1994), in the sense that heads can be subcategorized with respect to the syntactic and semantic properties of their arguments (i.e., their arguments' synsem values), but not with respect to their arguments' phonological properties. Although I am not convinced that this restriction is correct, it is worthwhile to explore what kinds of phonological analyses are compatible with it.

Most of the data used in this paper was drawn from the SOLE spoken corpus (Hitzeman et al., 1998).[1] The corpus was based on recordings of one speaker reading approximately 40 short descriptive texts concerning jewelry.

2 Syntactic and Prosodic Structure

2.1 Metrical Trees

Metrical trees were introduced by Liberman (1977) as a basis for formulating stress-assignment rules in both words and phrases. Syntactic constituents are assigned relative prosodic weight according to the following rule:

(1) NSR: In a configuration [C A B], if C is a phrasal category, B is strong.

Prominence is taken to be a relational notion: a constituent labelled 's' is stronger than its sister. Consequently, if B in (1) is strong, then A must be weak. In the case of a tree like (2), Liberman and Prince's (1) yields a binary-branching structure of the kind illustrated in (3) (where the root of the tree is unlabeled):

(2) [VP [V fasten] [NP [Det a] [N cloak]]]

(3) [ [w fasten] [s [w a] [s cloak]] ]

For any given constituent analysed by a metrical tree t, the location of its main stress can be found by tracing a path from the root of t to a terminal element α such that all nodes on that path are labelled 's'. Thus the main stress in (3) is located on the element cloak. In general, the most prominent element, defined in this way, is called the Designated Terminal Element (DTE) (Liberman and Prince, 1977).

[1] The task of recovering relevant examples from the SOLE corpus was considerably aided by the Gsearch corpus query system (Corley et al., 1999).
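As an informal aside (my own sketch, not part of the paper's ALE grammar), rule (1) and the stress-location procedure just described are easy to express in code: the NSR can be read as a transduction that labels every non-final daughter 'w' and the final daughter 's', and the DTE is then found by following 's'-labelled daughters down to a terminal.

```python
# Informal sketch (not from the paper): rule (1) as a tree transduction,
# plus DTE lookup.  Syntax trees are (category, [daughters]) pairs or word
# strings; metrical trees are lists of (label, subtree) pairs in which
# exactly one daughter per node is labelled "s".

def nsr(tree):
    """Label non-final daughters 'w' and the final daughter 's'."""
    if isinstance(tree, str):
        return tree
    _cat, daughters = tree
    if len(daughters) == 1:                    # unary branch: pass through
        return nsr(daughters[0])
    return ([("w", nsr(d)) for d in daughters[:-1]]
            + [("s", nsr(daughters[-1]))])

def dte(mtree):
    """Follow 's' daughters from the root down to a terminal element."""
    while not isinstance(mtree, str):
        mtree = next(sub for label, sub in mtree if label == "s")
    return mtree

# Tree (2): [VP [V fasten] [NP [Det a] [N cloak]]]
vp = ("VP", [("V", ["fasten"]),
             ("NP", [("Det", ["a"]), ("N", ["cloak"])])])
tree3 = nsr(vp)     # the metrical tree in (3)
print(dte(tree3))   # -> cloak: the main stress, as described in the text
```

Because `nsr` iterates over all daughters, the same code also accepts the n-ary trees adopted later in the paper.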
Note that (1) is the metrical version of Chomsky and Halle's (1968) Nuclear Stress Rule (NSR), and encodes the same claim, namely that in the default case, main stress falls on the last constituent in a given phrase. Of course, it has often been argued that the notion of 'default prominence' is flawed, since it supposes that the acceptability of utterances can be judged in a null context. Nevertheless, there is an alternative conception: the predictions of the NSR correctly describe the prominence patterns when the whole proposition expressed by the clause in question receives broad focus (Ladd, 1996). This is the view that I will adopt. Although I will concentrate in the rest of the paper on the broad focus pattern of intonation, the approach I develop is intended to link up eventually with pragmatic information about the location of narrow focus.

In the formulation above, (1) only applies to binary-branching constituents, and the question arises how non-binary branching constituent structures (e.g., for VPs headed by ditransitive verbs) should be treated. One option (Beckman, 1986; Pierrehumbert and Beckman, 1988; Nespor and Vogel, 1986) would be to drop the restriction that metrical trees are binary, allowing structures such as the one in Fig 1. Since the nested structure which results from binary branching appears to be irrelevant to phonetic interpretation, I will use n-ary metrical trees in the following analysis.

In this paper, I will not make use of the Prosodic Hierarchy (Beckman and Pierrehumbert, 1986; Nespor and Vogel, 1986; Selkirk, 1981; Selkirk, 1984). Most of the phenomena that I wish to deal with lie in the blurry region (Shattuck-Hufnagel and Turk, 1996) between the Phonological Word and the Intonational Phrase (IP), and I will just refer to 'prosodic constituents' without committing myself to a specific set of labels.
I will also not adopt the Strict Layer Hypothesis (Selkirk, 1984) which holds that elements of a given prosodic category (such as Intonational Phrase) must be exhaustively analysed into a sequence of elements of the next lower category (such as Phonological Phrase). However, it is important to note that every IP will be a prosodic constituent, in my sense. Moreover, my lower-level prosodic constituents could be identified with the ϕ-phrases of (Selkirk, 1981; Gee and Grosjean, 1983; Nespor and Vogel, 1986; Bachenko and Fitzpatrick, 1990), which are grouped together to make IPs.

2.2 Representing Prosodic Structure

I shall follow standard assumptions in HPSG by separating the phonology attribute out from syntax-semantics (SYNSEM):

(4) feat-struc → [PHON: pros, SYNSEM: synsem]

The type of the value of PHON is pros (i.e., prosody). In this paper, I am going to take word forms as phonologically simple. This means that the prosodic type of word forms will be maximal in the hierarchy. The only complex prosodic objects will be metrical trees. The minimum requirements for these are that we have, first, a way of representing nested prosodic domains, and second, a way of marking the strong element (Designated Terminal Element; DTE) in a given domain.

Before elaborating the prosodic signature further, I need to briefly address the prosodic status of monosyllabic function words in English. Although these are sometimes classified as clitics, Zwicky (1982) proposes the term Leaners. These "form a rhythmic unit with the neighbouring material, are normally unstressed with respect to this material, and do not bear the intonational peak of the unit. English articles, coordinating conjunctions, complementizers, relative markers, and subject and object pronouns are all leaners in this sense" (Zwicky, 1982, p5).
Zwicky takes pains to differentiate between Leaners and clitics; the former combine with neighbours to form Phonological Phrases (with juncture characterized by external sandhi), whereas clitics combine with their hosts to form Phonological Words (where juncture is characterized by internal sandhi). Since Leaners cannot bear intonational peaks, they cannot act as the DTE of a metrical tree. Consequently, the value of the attribute DTE in a metrical tree must be the type of all prosodic objects which are not Leaners. I call this type full, and it subsumes both Prosodic Words (of type p-wrd) and metrical trees (of type mtr). Moreover, since Leaners form a closer juncture with their neighbours than Prosodic Words do, we distinguish two kinds of metrical tree. In a tree of type full-mtr, all the daughters are of type full, whereas in a tree of type lnr-mtr, only the DTE is of type full.  w fasten w w the s cloak s w at w the s collar Figure 1: Non-binary Metrical Tree pros lnr full p-wrd mtr DOM: list(pros) DTE: full lnr-mtr DOM: list(lnr)  h 1 i DTE: 1 full-mtr DOM: list(full) Figure 2: Prosodic Signature In terms of the attribute-value logic, we therefore postulate a type mtr of metrical tree which introduces the feature DOM (prosodic domain) whose value is a list of prosodic elements, and a feature DTE whose value is a full prosodic object: (5) mtr ! " DOM list(pros) DTE full # Fig 2 displays the prosodic signature for the grammar. The types lnr-mtr and full-mtr specialise the appropriateness conditions on mtr, as discussed above. Notice that in the constraint for objects of type lnr-mtr,  is the operation of appending two lists. Since elements of type pros can be word-forms or metrical trees, the DOM value in a mtr can, in principle, be a list whose elements range from simple word-forms to lists of any level of embedding. One way of interpreting this is to say that DOM values need not obey the Strict Layer Hypothesis (briefly mentioned in Section 2.1 above). 
To illustrate, a sign whose phonology value corresponded to the metrical tree (6) (where the word this receives narrow focus) would receive the representation in Fig 3. (6)  w fasten s s this w cloak 2 6 6 6 6 6 6 6 6 6 6 6 4 sign PHON 2 6 6 6 6 6 6 6 6 4 full-mtr DOM * fasten, 1 2 6 6 4 full-mtr DOM D 2 this, cloak E DTE 2 3 7 7 5 + DTE 1 3 7 7 7 7 7 7 7 7 5 3 7 7 7 7 7 7 7 7 7 7 7 5 Figure 3: Feature-based Encoding of a Metrical Tree 3 Associating Prosody with Syntax In this section, I will address the way in which prosodic constituents can be constructed in parallel with syntactic ones. There are two, orthogonal, dimensions to the discussion. The first is whether the syntactic construction in question is head-initial or head-final. The second is whether any of the constituents involved in the construction is a Leaner or not. I will take the first dimension as primary, and introduce issues about Leaners as appropriate. The approach which I will present has been implemented in ALE (Carpenter and Penn, 1999), and although I will largely avoid presenting the rules in ALE notation, I have expressed the operations for building prosodic structures so as to closely reflect the relational constraints encoded in the ALE grammar. 3.1 Head-Initial Constructions As far as head-initial constructions are concerned, I will confine my attention to syntactic constituents which are assembled by means of HPSG’s Head2 6 6 4 phrase PHON mkMtr (hϕ0 ;ϕ1 ; : : :ϕn i) SYNSEM h COMPS hi i 3 7 7 5 ! 2 6 6 6 4 word PHON ϕ0 COMPS  1 h PHON ϕ1 i , : : : , n h PHON ϕ1 i  3 7 7 7 5 1 , : : : , n Figure 4: Head-Complement Rule Complement Rule (Pollard and Sag, 1994), illustrated in Fig 4. The ALE rendering of the rule is given in (7). (7) head_complement rule (phrase, phon:MoPhon, synsem:(comps:[], spr:S, head:Head)) ===> cat> (word, phon:HdPhon, synsem:(comps:Comps, spr:S, head:Head)), cats> Comps, goal> (getPhon(Comps, PhonList), mkMtr([HdPhon|PhonList], MoPhon)). 
The function mkMtr (make metrical tree) (encoded as a relational constraint in (7)) takes a list consisting of all the daughters’ phonologies and builds an appropriate prosodic object ϕ. As the name of the function suggests, this prosodic object is, in the general case, a metrical tree. However, since metrical trees are relational (i.e., one node is stronger than the others), it makes no sense to construct a metrical tree if there is only a single daughter. In other words, if the head’s COMPS list is empty, then the argument mkMtr is a singleton list containing only the head’s PHON value, and this is returned unaltered as the function value. (8) mkMtr( h 1 [pros] i) = 1 The general case requires at least the first two elements on the list of prosodies to be of type full, and builds a tree of type full mtr. (9) mkMtr( 1 h[full], [full], : : : , 2 i) = 2 6 4 full-mtr DOM 1 DTE 2 3 7 5 Note that the domain of the output tree is the input list, and the DTE is just the right-hand element of the domain. (10) shows the constraint in ALE notation; the relation rhd DTE/2 simply picks out the last element of the list L. (10) mkMtr(([full, full|_], L), (full_mtr, dom:L, dte:X)) if rhd_DTE(L, X). Examples of the prosody constructed for an N-bar and a VP are illustrated in (11)–(12). For convenience, I use [of the samurai] to abbreviate the AVM representation of the metrical tree for of the samurai, and similarly for [a cloak] and [at the collar]. (11) mkMtr( hpossession, [of the samurai] i) = 2 6 6 4 full-mtr DOM D possession, 1 [of the samurai] E DTE 1 3 7 7 5 (12) mkMtr( hfasten, [a cloak], [at the collar] i) = 2 6 6 4 full-mtr DOM D fasten, [a cloak], 1 [at the collar] E DTE 1 3 7 7 5 Let’s now briefly consider the case of a weak pronominal NP occurring within a VP. Zwicky (1986) develops a prosodically-based account of the distribution of unaccented pronouns in English, as illustrated in the following contrasts: (13) a. We took in the unhappy little mutt right away. 
b.*We took in hˇim right away. c. We took hˇim in right away. (14) a. Martha told Noel the plot of Gravity’s Rainbow. b.*Martha told Noel ˇit. c. Martha told ˇit to Noel. Pronominal NPs can only form prosodic phrases in their own right if they bear accent; unaccented pronominals must combine with a host to be admissible. Zwicky’s constraints on when this combination can occur are as follows: (15) A personal pronoun NP can form a prosodic phrase with a preceding prosodic host only if the following conditions are satisfied: a. the prosodic host and the pronominal NP are sisters; b. the prosodic host is a lexical category; c. the prosodic host is a category that governs case marking. 2 6 6 4 phrase PHON extMtr (ϕ1 ;ϕ0 ) SYNSEM h SPR hi i 3 7 7 5 ! 1 2 6 6 6 4 phrase PHON ϕ0 SPR  1 h PHON ϕ1 i  3 7 7 7 5 Figure 5: Head-Specifier Rule These considerations motivate a third clause to the definition of mkMtr: (16) mkMtr( h 1 [p-wrd], 2 [lnr] i 3 ) = mkMtr( h 2 6 4 lnr-mtr DOM h 1 , 2 i DTE 1 3 7 5  3 i ) That is, if the first two elements of the list are a Prosodic Word and a Leaner, then the two of them combine to form a lnr-mtr, followed by any other material on the input list. Because of the way in which this prosodic constraint is associated with the Head-Complement Rule, the prosodic host in (16), namely the p-wrd tagged 1 , is automatically the syntactic head of the construction. As a result, Zwicky’s conditions in (15) fall out directly. (17)–(18) illustrate the effects of the new clause. In the first case, the lnr-mtr consisting of told and it is the only item on the list in the recursive call to mkMtr in (16), and hence the base clause (8) in the definition of mkMtr applies. In the second case, there is more than one item on the list, and the lnr-mtr becomes a subtree in a larger metrical domain. 
(17) mkMtr([told, it]) = 2 6 6 4 lnr-mtr DOM D 1 told, it E DTE 1 3 7 7 5 (18) mkMtr([told, it, [to Noel]]) = 2 6 6 6 6 6 6 6 6 4 full-mtr DOM * 2 6 6 4 lnr-mtr DOM D 1 told, it E DTE 1 3 7 7 52 [to Noel] + DTE 2 3 7 7 7 7 7 7 7 7 5 By contrast, examples of the form told Noel ˇit fail to parse, since (16) only licenses a head-initial lnr-mtr when the Leaner immediately follows the head. We could however admit told Noel ´it, if the lexicon contained a suitable entry for accent-bearing ´it with prosody of type p wrd, since this would satisfy the requirement that only prosodies of type full can be the value of a metrical tree’s DTE. 3.2 Head-Final Constructions To illustrate head-final constructions, I will focus on NP structures, considering the combination of determiners and prenominal adjectives with N-bar phrases. I take the general case to be illustrated by combining a determiner like this with a phrase like treasured possession to form one metrical tree. Since treasured possession will itself be a metrical tree, I introduce a new, binary, function for this purpose, namely extMtr (extend metrical tree) which adds a new prosodic element to the left boundary of an existing tree. For convenience, I will call the leftmost argument of extMtr the extender. Fig 5 illustrates the way in which extMtr is used to build the prosody of a specifier-head construction, while (19) provides the definition of extMtr. An example of the output is illustrated in (20). (19) extMtr( 1 [full], " DOM 2 DTE 3 # ) = 2 6 4 full-mtr DOM 1  2 DTE 3 3 7 5 (20) extMtr(this, [treasured possession]) = 2 6 6 4 full-mtr DOM D this, treasured, 1 possession E DTE 1 3 7 7 5 However, there are now a number of special cases to be considered. First, we have to allow that the head phrase is a single Prosodic Word such as possession, rather than a metrical tree. 
Second, the prosodic structure to be built will be more complex if the head phrase itself contains a post-head complement, as in treasured possession of the samurai. Crosscutting this dimension is the question of whether the extender is a Leaner, in which case it will form a lnr-mtr with the immediately following element. We will look at these cases in turn.
(i) The head is a single Prosodic Word. When the second prosodic argument of extMtr is not in fact a metrical tree, it calls mkMtr to build a new metrical tree. Definition (21) is illustrated in (22).
[Figure 6: Right-branching NP structure — [NP [Det the] [Nom [AdjP most treasured] [Nom [N possession] [PP [P of] [NP the samurai]]]]]]
[Figure 7: Flat NP prosodic structure — weak (w) and strong (s) nodes over the most treasured possession / of the samurai]
(21) extMtr(1[pros], 2[p-wrd]) = mkMtr(⟨1, 2⟩)
(22) extMtr(treasured, possession) = [full-mtr, DOM ⟨treasured, 1 possession⟩, DTE 1]
(ii) The head contains post-head material. Perhaps the most awkward kind of mismatch between syntactic and prosodic structure arises when the complement or postmodifier of a syntactic head is 'promoted' to the level of sister of the constituent in which the head occurs; this creates a disjuncture between the lexical head and whatever follows. Fig 6 gives a typical example of this phenomenon, where the noun possession is followed by a prepositional complement, while Fig 7 represents the prosodic constituency. Let's consider how treasured should combine with possession of the samurai. The Head-Complement Rule will have built a prosodic structure of the form [possession [of the samurai]] for the latter phrase. To obtain the correct results, we need to be able to detect that this is a metrical tree M whose leftmost element is a lexical head (by contrast, for example, with the structure [treasured possession]).
In just this case, the extender can not only extend M but also create a new subtree by left-associating with the lexical head.² The required definition is shown in (23) and illustrated in example (24).
(23) extMtr(1[full], [DOM ⟨2[p-wrd]⟩ ⊕ 3, DTE 4]) = [full-mtr, DOM extMtr(1, 2) ⊕ 3, DTE 4], provided that 2 is the lexical head.
² The special prosodic status of lexical heads is incorporated in Selkirk's (1981) notion of ϕ-phrase, and subsequent developments thereof, such as (Selkirk, 1986; Nespor and Vogel, 1986).
(24) extMtr(this, [full-mtr, DOM ⟨possession, 1[of the samurai]⟩, DTE 1]) = [full-mtr, DOM ⟨[DOM ⟨this, 2 possession⟩, DTE 2], 1[of the samurai]⟩, DTE 1]
Turning back briefly to the Head-Specifier Rule shown in Fig 5, we can now see that if ϕ0 is a metrical tree M, then the value of extMtr(ϕ1, ϕ0) depends on the syntactic information associated with the leftmost element P of that tree. That is, if P is the phonology of the lexical head of the phrase, then it can be prosodically disjoined from the following material; otherwise the metrical tree M is extended in the standard way. There are various ways that this sensitivity to syntactic role might be accommodated. One option would be to inspect the DTRS (daughters) attribute of a sign. However, I will briefly sketch the treatment implemented in the ALE grammar, which does not build a representation of daughters. Instead, I have introduced an attribute LEX inside the value of HEAD which is constrained in the case of lexical items to be token-identical to the PHON value.
For example, the type for possession is approximately as follows:
(25) [word, PHON 1 possession, SYNSEM [SYN|HEAD [noun, LEX 1], ARG-ST ⟨PP⟩]]
Since LEX is a head feature, it percolates up to any phrase projected from that head, and allows the PHON value of the lexical head to be accessed at that projection; i.e., headed phrases will also bear a specification [LEX phon], which can be interpreted as saying "my lexical head's phonology value is phon". In addition, we let the function extMtr in Fig 5 take as an extra argument the HEAD value of the mother, and then test whether the leftmost Prosodic Word in the metrical tree being extended is the same as the LEX value of the mother's HEAD value.
(iii) Extending the head with a Leaner. Finally, there is an additional clause to accommodate the case where the extending element is a Leaner. This triggers a kind of left association, in that the result of combining a with [treasured possession] is a structure of the form [[a treasured] possession].
(26) extMtr(1[lnr], [DOM ⟨2⟩ ⊕ 3, DTE 4]) = [full-mtr, DOM extMtr(1, 2) ⊕ 3, DTE 4]
This will also allow an unaccented subject pronoun to left-associate with the lexical head of a VP, as in [[he provoked] [the objections of everyone]] (Gee and Grosjean, 1983).
4 Concluding Remarks
I believe that the preceding analysis demonstrates that despite the well-known mismatches between syntactic and prosodic structure, it is possible to induce the required prosodic structures in tandem with syntax. Moreover, the analysis retains rather conventional notions of syntactic constituency, eschewing the nonstandard syntactic constituents advocated by Prevost and Steedman (1993) and Steedman (1990; 1991). Although I have only mentioned two syntactic rules in HPSG, the radically underspecified nature of these rules, coupled with rich lexical entries, means that the approach I have sketched has more generality than might appear at first.
With the addition of a rule for prenominal adjectives, prosodically interpreted like the Head-Specifier Rule, we can derive a range of analyses as summarised in (27). Here, I use square brackets to demarcate trees of type full-mtr and parentheses for trees of type lnr-mtr. (27) a. [this possession](of the samurai) b. [this treasured possession](of the samurai) c. (a treasured) possession d. (a treasured) possession [(of these) people] e. Kim gave (the book) (to the boy) f. Kim (gave it) (to the boy) g. Kim is happy [about Lee] h. Kim is happy [(that Lee) is fond (of the bird)] i. Kim wanted (to rely) (on the report) [(that Lee) is fond (of the bird)] It would be straightforward to augment the grammar to accommodate post-modifiers of various kinds, which would behave prosodically like post-head complements. By contrast, auxiliaries do not conform to the association between headed structures and prosodic structures that we have seen so far. That is, if auxiliaries are a subtype of complement-taking verbs, as assumed within HPSG, then they depart from the usual pattern in behaving prosodically like specifiers rather than heads. There are numerous directions in which the current work can be extended. In terms of empirical coverage, a more detailed account of weak function words seems highly desirable. The approach can also be tested within the context of speech synthesis, and preliminary work is underway on extending the Festival system (Black and Taylor, 1997) to accept input text marked up with metrical trees of the kind presented here. In the longer term, the intention is to integrate prosodic realisation within the framework of an HPSG-based concept-to-speech system. Acknowledgements I am grateful to Philip Miller, Mike Reape, Ivan Sag and Paul Taylor for their helpful comments on various incarnations of the work reported here. References J. Bachenko and E. Fitzpatrick. 1990. A computational grammar of discourse-neutral prosodic phrasing in English. 
Computational Linguistics, 16(3):155–170. Mary E. Beckman and Janet B. Pierrehumbert. 1986. Intonational structure in English and Japanese. Phonology Yearbook, 3:255–310. Mary E. Beckman. 1986. Stress and Non-Stress Accent. Foris, Dordrecht, Holland. Steven Bird. 1995. Computational Phonology: A Constraint-Based Approach. Studies in Natural Language Processing. Cambridge University Press. Alan W. Black and Paul Taylor. 1997. The Festival speech synthesis system. Technical Report TR-83, Human Communication Research Centre, University of Edinburgh, Edinburgh, UK, January. Bob Carpenter and Gerald Penn, 1999. ALE: The Attribute Logic Engine. User's Guide. Bell Laboratories, Lucent Technologies, Murray Hill, NJ, version 3.2 beta edition. Noam Chomsky and Morris Halle. 1968. The Sound Pattern of English. Harper and Row, New York. Steffan Corley, Martin Corley, Frank Keller, Matthew W. Crocker, and Shari Trewin. 1999. Finding syntactic structure in unparsed corpora: The Gsearch corpus query system. Computers and the Humanities. James Paul Gee and François Grosjean. 1983. Performance structures: a psycholinguistic and linguistic appraisal. Cognitive Psychology, 15:411–458. Janet Hitzeman, Alan W. Black, Paul Taylor, Chris Mellish, and Jon Oberlander. 1998. On the use of automatically generated discourse-level information in a concept-to-speech synthesis system. In ICSLP'98, pages 2763–2768. D. Robert Ladd. 1996. Intonational Phonology. Cambridge University Press, Cambridge. Mark Liberman and Alan Prince. 1977. On stress and linguistic rhythm. Linguistic Inquiry, 8:249–336. Michael Mastroianni and Bob Carpenter. 1994. Constraint-based morpho-phonology. In Proceedings of the First ACL SIGPhon Workshop, Las Cruces, New Mexico. Association for Computational Linguistics. Marina Nespor and Irene Vogel. 1986. Prosodic Phonology. Number 28 in Studies in Generative Grammar. Foris Publications, Dordrecht. Janet B. Pierrehumbert and Mary E. Beckman. 1988. Japanese Tone Structure.
Number 15 in Linguistic Inquiry Monographs. The MIT Press, Cambridge, MA. Carl Pollard and Ivan A. Sag. 1994. Head-Driven Phrase Structure Grammar. CSLI and University of Chicago Press, Stanford, Ca. and Chicago, Ill. Scott Prevost and Mark Steedman. 1993. Generating contextually appropriate intonation. In Proceedings of the 6th Conference of the European Chapter of the Association for Computational Linguistics, pages 332–340, Utrecht, The Netherlands, April 21–23. OTS (The Research Institute for Language and Speech). Elisabeth Selkirk. 1981. On prosodic structure and its relation to syntactic structure. In T. Fretheim, editor, Nordic Prosody II: Papers from a Symposium. Tapir, Trondheim. Elisabeth O. Selkirk. 1984. Phonology and Syntax: The Relation between Sound and Structure. Current Studies in Linguistics. MIT Press, Cambridge, Mass. Elisabeth O. Selkirk. 1986. On derived domains in sentence phonology. Phonology Yearbook, 3:371– 405. Stefanie Shattuck-Hufnagel and Alice E. Turk. 1996. A prosody tutorial for investigators of auditory sentence processing. Journal of Psycholinguistic Research, 25(2):193–247. Mark Steedman. 1990. Intonation and structure in spoken language understanding. In Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics, pages 9–16, Pittsburgh, Pa., June. University of Pittsburgh. Mark Steedman. 1991. Structure and intonation. Language, 67(2):260–296, June. Arnold M. Zwicky. 1982. Stranded to and phonological phrasing in English. Linguistics, 20(1/2):3–57. Arnold M. Zwicky. 1986. The unaccented pronoun constraint in English. In Arnold M. Zwicky, editor, Interfaces, volume 32 of Ohio State University Working Papers in Linguistics, pages 100–114. Ohio State University Department of Linguistics, July.
Inducing Probabilistic Syllable Classes Using Multivariate Clustering
Karin Müller, Bernd Möbius, and Detlef Prescher
Institut für Maschinelle Sprachverarbeitung, University of Stuttgart, Germany
{karin.mueller|bernd.moebius|detlef.prescher}[email protected]
Abstract
An approach to automatic detection of syllable structure is presented. We demonstrate a novel application of EM-based clustering to multivariate data, exemplified by the induction of 3- and 5-dimensional probabilistic syllable classes. The qualitative evaluation shows that the method yields phonologically meaningful syllable classes. We then propose a novel approach to grapheme-to-phoneme conversion and show that syllable structure represents valuable information for pronunciation systems.
1 Introduction
In this paper we present an approach to unsupervised learning and automatic detection of syllable structure. The primary goal of the paper is to demonstrate the application of EM-based clustering to multivariate data. The suitability of this approach is exemplified by the induction of 3- and 5-dimensional probabilistic syllable classes. A secondary goal is to outline a novel approach to the conversion of graphemes to phonemes (g2p) which uses a context-free grammar (cfg) to generate all sequences of phonemes corresponding to a given orthographic input word and then ranks the hypotheses according to the probabilistic information coded in the syllable classes. Our approach builds on two resources. The first resource is a cfg for g2p conversion that was constructed manually by a linguistic expert (Müller, 2000). The grammar describes how words are composed of syllables and how syllables consist of parts that are conventionally called onset, nucleus and coda, which in turn are composed of phonemes and corresponding graphemes.
The second resource consists of a multivariate clustering algorithm that is used to reveal syllable structure hidden in unannotated training data. In a first step, we collect syllables by going through a large text corpus, looking up the words and their syllabications in a pronunciation dictionary and counting the occurrence frequencies of the syllable types. Probabilistic syllable classes are then computed by applying maximum likelihood estimation from incomplete data via the EM algorithm. Two-dimensional EM-based clustering has been applied to tasks in syntax (Rooth et al., 1999), but so far this approach has not been used to derive models of higher dimensionality and, to the best of our knowledge, this is the first time that it is being applied to speech. Accordingly, we have trained 3- and 5-dimensional models for English and German syllable structure. The obtained models of syllable structure were evaluated in three ways. Firstly, the 3-dimensional models were subjected to a pseudo-disambiguation task, the result of which shows that the onset is the most variable part of the syllable. Secondly, the resulting syllable classes were qualitatively evaluated from a phonological and phonotactic point of view. Thirdly, a 5-dimensional syllable model for German was tested in a g2p conversion task. The results compare well with the best currently available data-driven approaches to g2p conversion (e.g., (Damper et al., 1999)) and suggest that syllable structure represents valuable information for pronunciation systems.
[Figure 1: Class #0 of a 3-dimensional English model, showing the most probable onsets, nuclei, and codas of the class with their probabilities]
[Figure 2: A class of a 5-dimensional German model, showing onset, nucleus, coda, syllable position, and stress values with their probabilities]
Such systems are critical components in text-to-speech (TTS) conversion systems, and they are also increasingly used to generate pronunciation variants in automatic speech recognition. The rest of the paper is organized as follows. In Section 2 we introduce the multivariate clustering algorithm. In Section 3 we present four experiments based on 3- and 5-dimensional data for German and English. Section 4 is dedicated to evaluation and in Section 5 we discuss our results.
2 Multivariate Syllable Clustering
EM-based clustering has been derived and applied to syntax (Rooth et al., 1999). Unfortunately, this approach is not applicable to multivariate data with more than two dimensions. However, we consider syllables to consist of at least three dimensions corresponding to parts of the internal syllable structure: onset, nucleus and coda. We have also experimented with 5-dimensional models by adding two more dimensions: position of the syllable in the word and stress status. In our multivariate clustering approach, classes corresponding to syllables are viewed as hidden data in the context of maximum likelihood estimation from incomplete data via the EM algorithm. The two main tasks of EM-based clustering are (i) the induction of a smooth probability model on the data, and (ii) the automatic discovery of class structure in the data. Both aspects are considered in our application. We aim to derive a probability distribution p(y) on syllables y from a large sample. The key idea is to view y as conditioned on an unobserved class c ∈ C, where the classes are given no prior interpretation. The probability of a syllable y = (y_1, …, y_d) ∈ Y_1 × … × Y_d, d ≥ 3, is defined as:
p(y) = Σ_{c∈C} p(c, y) = Σ_{c∈C} p(c) p(y|c) = Σ_{c∈C} p(c) Π_{i=1}^{d} p(y_i|c)
Note that conditioning of y_i on each other is solely made through the classes c via the independence assumption p(y|c) = Π_{i=1}^{d} p(y_i|c).
This assumption makes clustering feasible in the first place; later on (in Section 4.1) we will experimentally determine the number |C| of classes such that the assumption is optimally met. The EM algorithm (Dempster et al., 1977) is directed at maximizing the incomplete-data log-likelihood L = Σ_y p̃(y) ln p(y) as a function of the probability distribution p for a given empirical probability distribution p̃. Our application is an instance of the EM algorithm for context-free models (Baum et al., 1970), from which simple re-estimation formulae can be derived. Let f(y) be the frequency of syllable y, |f| = Σ_{y∈Y} f(y) the total frequency of the sample (i.e. p̃(y) = f(y)/|f|), and f_c(y) = f(y) p(c|y) the estimated frequency of y annotated with c. Parameter updates p̂(c), p̂(y_i|c) can thus be computed by (c ∈ C, y_i ∈ Y_i, i = 1, …, d):
p̂(c) = Σ_{y∈Y} f_c(y) / |f|, and
p̂(y_i|c) = Σ_{y ∈ Y_1×…×Y_{i−1}×{y_i}×Y_{i+1}×…×Y_d} f_c(y) / Σ_{y∈Y} f_c(y)
As shown by Baum et al. (1970), every such maximization step increases the log-likelihood function L, and a sequence of re-estimates eventually converges to a (local) maximum.
[Figure 3: Selected classes of the 5-dimensional English model, showing onset, nucleus, coda, position, and stress values with their probabilities]
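The re-estimation formulae for p̂(c) and p̂(y_i|c) can be sketched as a small EM loop. This is a toy reconstruction under stated assumptions (random initialisation, plain Python dictionaries, a made-up four-syllable sample), not the authors' implementation.

```python
# Toy multivariate EM clustering: syllables are (onset, nucleus, coda)
# tuples with counts f(y); classes c are latent.
import math
import random
from collections import defaultdict

def em_cluster(freq, n_classes, n_iter=50, seed=0):
    """freq maps syllable tuples y = (y_1, ..., y_d) to counts f(y)."""
    rng = random.Random(seed)
    d = len(next(iter(freq)))
    total = sum(freq.values())              # |f|
    p_c = [1.0 / n_classes] * n_classes
    # p(y_i | c), randomly initialised so that classes can diverge
    p_yc = []
    for c in range(n_classes):
        dims = []
        for i in range(d):
            vals = sorted({y[i] for y in freq})
            w = [rng.random() + 0.1 for _ in vals]
            z = sum(w)
            dims.append({v: wi / z for v, wi in zip(vals, w)})
        p_yc.append(dims)
    for _ in range(n_iter):
        pc_new = [0.0] * n_classes
        pyc_new = [[defaultdict(float) for _ in range(d)]
                   for _ in range(n_classes)]
        for y, f in freq.items():
            # p(c, y) = p(c) * prod_i p(y_i | c)
            joint = [p_c[c] * math.prod(p_yc[c][i][y[i]] for i in range(d))
                     for c in range(n_classes)]
            z = sum(joint) or 1.0
            for c in range(n_classes):
                f_c = f * joint[c] / z      # f_c(y) = f(y) p(c | y)
                pc_new[c] += f_c
                for i in range(d):
                    pyc_new[c][i][y[i]] += f_c
        p_c = [pc_new[c] / total for c in range(n_classes)]       # p^(c)
        p_yc = [[{v: pyc_new[c][i][v] / (pc_new[c] or 1.0)        # p^(y_i|c)
                  for v in p_yc[c][i]} for i in range(d)]
                for c in range(n_classes)]
    return p_c, p_yc

# hypothetical sample: two phonotactically distinct syllable populations
freq = {('t', 'I', 'n'): 50, ('z', 'I', 'n'): 40,
        ('m', 'aI', 'n'): 45, ('fR', 'aI', ''): 35}
p_c, p_yc = em_cluster(freq, n_classes=2)
```

Each iteration distributes the observed count f(y) over the classes in proportion to p(c|y) and then renormalises, which is exactly the update pattern of the two formulae above.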
3 Experiments
A sample of syllables serves as input to the multivariate clustering algorithm. The German data were extracted from the Stuttgarter Zeitung (STZ), a large newspaper corpus. The English data came from the British National Corpus (BNC), a collection of written and spoken language containing about 100 million words. For both languages, syllables were collected by going through the corpus, looking up the words and their syllabications in a pronunciation dictionary (Baayen et al., 1995), and counting the occurrence frequencies of the syllable types. (We slightly modified the English pronunciation lexicon to obtain non-empty nuclei, e.g. /idealism/ [aI][dI@][lIzm,] was modified to [aI][dI@][lI][z@m] (SAMPA transcription). Subsequent experiments on syllable types (Müller et al., 2000) have shown that frequency counts represent valuable information for our clustering task.) In two experiments, we induced 3-dimensional models based on syllable onset, nucleus, and coda, collecting the distinct syllable types of German and English together with their frequencies. The number of syllable classes was systematically varied in iterated training runs and ranged up to 200. Figure 1 shows a selected segment of class #0 from a 3-dimensional English model. The first column displays the class index 0 and the class probability p(0). The most probable onsets and their probabilities are listed in descending order in the second column, as are nucleus and coda in the third and fourth columns, respectively. Empty onsets and codas were labeled NOP[nucleus]. Class #0 contains the highly frequent function words in, is, it, its as well as the suffixes -ing, -ting, -ling. Notice that these function words and suffixes appear to be separated in the 5-dimensional model (Figure 3).
In two further experiments, we induced 5-dimensional models, augmented by the additional parameters of position of the syllable in the word and stress status. Syllable position has four values: monosyllabic (ONE), initial (INI), medial (MED), and final (FIN). Stress has two values: stressed (STR) and unstressed (USTR). We again collected the distinct syllable types of German and English, and the number of syllable classes ranged up to 200. Figure 2 illustrates part of a class from a 5-dimensional German model. Syllable position and stress are displayed in the last two columns.
[Figure 4: Evaluation on the pseudo-disambiguation task for English (left) and German (right): accuracy for onset, nucleus, and coda as a function of the number of classes (up to 200)]
4 Evaluation
In the following sections, (i) the 3-dimensional models are subjected to a pseudo-disambiguation task (4.1); (ii) the syllable classes are qualitatively evaluated (4.2); and (iii) the 5-dimensional syllable model for German is tested in a g2p task (4.3).
4.1 Pseudo-Disambiguation
We evaluated our 3-dimensional clustering models on a pseudo-disambiguation task similar to the one described by Rooth et al. (1999), but specified to onset, nucleus, and coda ambiguity. The first task is to judge which of two onsets on and on′ is more likely to appear in the context of a given nucleus n and a given coda cod. For this purpose, we constructed an evaluation corpus of syllables (on, n, cod) selected from the original data. Then, randomly chosen onsets on′ were attached to all syllables in the evaluation corpus, with the resulting syllables (on′, n, cod) appearing neither in the training nor in the evaluation corpus. Furthermore, the elements on, n, cod, and on′ were required to be part of the training corpus.
Clustering models were parameterized in the starting values of EM training and in the number of classes of the model (up to 200), resulting in a large sequence of models. Accuracy was calculated as the number of times the model decided p(on, n, cod) ≥ p(on′, n, cod) for all choices made. Two similar tasks were designed for nucleus and coda. Results for the best starting values are shown in Figure 4. Models with several dozen classes show the highest accuracy rates. For both German and English, the accuracy rates achieved for nucleus and coda were markedly higher than for the onset. The results of the pseudo-disambiguation agree with intuition: in both languages (i) the onset is the most variable part of the syllable, as it is easy to find minimal pairs that vary in the onset, and (ii) it is easier to predict the coda and nucleus, as their choice is more restricted.
4.2 Qualitative Evaluation
The following discussion is restricted to the 5-dimensional syllable models, as the quality of the output increased when more dimensions were added. We can look at the results from different angles. For instance, we can verify if any of the classes are mainly representatives of a syllable class pertinent to a particular nucleus (as is the case with the 3-dimensional models). Another interesting aspect is whether there are syllable classes that represent parts of lexical content words, as opposed to high-frequency function words. Finally, some syllable classes may correspond to productive affixes.
[Figure 5: Selected classes of the 5-dimensional German model, showing onset, nucleus, coda, position, and stress values with their probabilities]
German. The majority of syllable classes obtained for German is dominated by one particular nucleus per syllable class. In most classes the probability of the dominant nucleus is high, and in a number of cases it is indeed 100%. The only syllable nuclei that do not dominate any class are the front rounded vowels /y:, Y, ø:, œ/, the front vowel /E:/ and the diphthong /OY/, all of which are among the least frequently occurring nuclei in the lexicon of German. Figure 5 depicts the classes that will be discussed now. Almost one third of the classes are representatives of high-frequency function words. For example, one class is dominated by the function words in, ich, ist, im, sind, sich, all of which contain the short vowel /I/. A further portion of the classes represents syllables that are most likely to occur in initial, medial and final positions in the open word classes of the lexicon, i.e. nouns, adjectives, and verbs. One class covers several lexical entries involving the diphthong /aI/, mostly in stressed word-initial syllables; another provides complementary information, as it also includes syllables containing /aI/, but here mostly in word-medial position. We also observe syllable classes that represent productive prefixes (e.g., ver-, er-, zer-, vor-, her-) and suffixes (e.g., -lich, -ig). Finally, there are two syllable classes (not displayed) that cover the most common inflectional suffixes involving the vowel /@/ (schwa). Class numbers are informative insofar as the classes are ranked by decreasing probability.
Lower-ranked classes tend (i) not to be dominated by one nucleus; (ii) to contain vowels with relatively low frequency of occurrence; and (iii) to yield less clear patterns in terms of word class, stress or position. For illustration, one such class (see Figure 2) represents the syllable ent [Ent], both as a prefix (INI) and as a suffix (FIN), the former being unstressed (as in Entwurf 'design') and the latter stressed (as in Dirigent 'conductor').
English. In the majority of the syllable classes obtained for English, one dominant nucleus per syllable class is observed. In all of these cases the probability of the nucleus is high, and in several classes the nucleus probability is 100%. Besides several diphthongs, only a few relatively infrequent vowels, such as /V/ and /A:/, do not dominate any class. Figure 3 shows the classes that are described as follows. High-frequency function words are represented by a number of syllable classes. For example, two classes are dominated by the determiners the and a, respectively, and another contains function words that involve the short vowel /I/, such as in, is, it, his, if, its. Productive word-forming suffixes are found in one class (-ing), and common inflectional suffixes in another (-er, -es, -ed).
[Figure 6: An incorrect (left, [lø:ts][i:n]) and a correct (right, [lø:t][tsIn]) cfg analysis of Lötzinn]
One further class is particularly interesting in that it represents a comparably large number of common suffixes, such as -tion, -ment, -al, -ant, -ent, -ence and others. The majority of syllable classes contains syllables that are likely to be found in initial, medial and final positions in the open word classes of the lexicon. For example, one class represents mostly stressed syllables involving the vowels /eI, A:, O:/ and others, in a variety of syllable positions in nouns, adjectives or verbs.
4.3 Evaluation by g2p Conversion
In this section, we present a novel method of g2p conversion (i) using a cfg to produce all possible phonemic correspondences of a given grapheme string, (ii) applying a probabilistic syllable model to rank the pronunciation hypotheses, and (iii) predicting pronunciation by choosing the most probable analysis. We used a cfg for generating transcriptions, because grammars are expressive and writing grammar rules is easy and intuitive. Our grammar describes how words are composed of syllables and syllables branch into onset, nucleus and coda. These syllable parts are re-written by the grammar as sequences of natural phone classes, e.g. stops, fricatives, nasals, liquids, as well as long and short vowels, and diphthongs. The phone classes are then re-interpreted as the individual phonemes that they are made up of. Finally, for each phoneme all possible graphemic correspondences are listed. Figure 6 illustrates two of the many analyses of the German word Lötzinn (tin solder). The phoneme strings (represented by non-terminals named phon=...) and the syllable boundaries (represented by the non-terminal Syl) can be extracted from these analyses. Figure 6 depicts both an incorrect analysis [lø:ts][i:n] and its correct counterpart [lø:t][tsIn]. The next step is to rank these transcriptions by assigning probabilities to them. The key idea is to take the product of the syllable probabilities.
Using the 5-dimensional German syllable model (position can be derived from the cfg analyses; stress placement is controlled by the most likely distribution) yields a higher product of syllable probabilities for the correct analysis than for the incorrect one. Thus we achieve the desired result of assigning the higher probability to the correct transcription. We evaluated our g2p system on a test set of unseen words. The test set was constructed by collecting words from the German CELEX dictionary (Baayen et al., 1995) that were not seen in the STZ corpus. From this set we manually eliminated (i) foreign words, (ii) acronyms, (iii) proper names, (iv) verbs, and (v) words with more than three syllables. The resulting test set is available on the World Wide Web (http://www.ims.uni-stuttgart.de/phonetik/gp/).
[Figure 7: Word accuracy of four g2p systems: the 3- and 5-dimensional baselines and the systems using 3- and 5-dimensional syllable classes]
Figure 7 shows the performance of four g2p systems. The second and fourth columns show the accuracy of two baseline systems: g2p conversion using the 3- and 5-dimensional empirical distributions (Section 3), respectively. The third and fifth columns show the word accuracy of two g2p systems using 3- and 5-dimensional syllable models, respectively. The g2p system using 5-dimensional syllable models achieved the highest performance, a gain over both the 5-dimensional baseline system and the 3-dimensional models.
5 Discussion
We have presented an approach to unsupervised learning and automatic detection of syllable structure, using EM-based multivariate clustering. The method yields phonologically meaningful syllable classes.
These classes are shown to represent valuable input information in a g2p conversion task. In contrast to the application of two-dimensional EM-based clustering to syntax (Rooth et al., 1999), where semantic relations were revealed between verbs and objects, the syllable models cannot a priori be expected to yield similarly meaningful properties. This is because the syllable constituents (or phones) represent an inventory with a small number of units which can be combined to form meaningful larger units, viz. morphemes and words, but which do not themselves carry meaning. Thus, there is no reason why certain syllable types should occur significantly more often than others, except for the fact that certain morphemes and words have a higher frequency count than others in a given text corpus. As discussed above, however, we do find some interesting properties of syllable classes, some of which apparently represent high-frequency function words and productive affixes, while others are typically found in lexical content words. Subjected to a pseudo-disambiguation task, the …-dimensional models confirm the intuition that the onset is the most variable part of the syllable. (… resp. … words could not be disambiguated by the …- resp. …-dimensional empirical distributions; the relatively small gains reported can be explained by the fact that our syllable models were applied only to this small number of ambiguous words.)

In a feasibility study we applied the …-dimensional syllable model obtained for German to a g2p conversion task. Automatic conversion of a string of characters, i.e. a word, into a string of phonemes, i.e. its pronunciation, is essential for applications such as speech synthesis from unrestricted text input, which can be expected to contain words that are not in the system's pronunciation dictionary or otherwise unknown to the system.
The main purpose of the feasibility study was to demonstrate the relevance of phonological information on syllable structure for g2p conversion. Therefore, information and probabilities derived from an alignment of grapheme and phoneme strings, i.e. the lowest two levels in the trees displayed in the figure, were deliberately ignored. Data-driven pronunciation systems usually rely on training data that include an alignment of graphemes and phonemes. Damper et al. (1999) have shown that the use of unaligned training data significantly reduces the performance of g2p systems. In our experiment, with training on unannotated text corpora and without an alignment of graphemes and phonemes, we obtained a word accuracy rate of …% for the …-dimensional German syllable model. Comparison of this performance with other systems is difficult: (i) hardly any quantitative g2p performance data are available for German; (ii) comparisons across languages are hard to interpret; (iii) comparisons across different approaches require cautious interpretation. The most direct point of comparison is the method presented by Müller (2000). In one of her experiments, the standard probability model was applied to the hand-crafted CFG presented in this paper, yielding …% word accuracy as evaluated on our test set. Running the test set through the pronunciation rule system of the IMS German Festival TTS system (Möhler) resulted in …% word accuracy. The Bell Labs German TTS system (Möbius, 1999) performed at better than …% word accuracy on our test set. This TTS system relies on an annotation of morphological structure for the words in its lexicon and it performs a morphological analysis of unknown words (Möbius, 1998); the pronunciation rules draw on this structural information. These comparative results emphasize the value of phonotactic knowledge and information on syllable structure and morphological structure for g2p conversion.
In a comparison across languages, a word accuracy rate of …% for our …-dimensional German syllable model is slightly higher than the best data-driven method for English with …% (Damper et al., 1999). Recently, Bouma (2000) has reported a word accuracy of …% for Dutch, using a "lazy" training strategy on data aligned with the correct phoneme string, and a hand-crafted system that relied on a large set of rule templates and a many-to-one mapping of characters to graphemes preceding the actual g2p conversion. We are confident that a judicious combination of phonological information of the type employed in our feasibility study with standard techniques such as g2p alignment of training data will produce a pronunciation system with a word accuracy that matches the one reported by Bouma (2000). We believe, however, that for an optimally performing system, as is desired for TTS, an even more complex design will have to be adopted. In many languages, including English, German and Dutch, access to morphological and phonological information is required to reliably predict the pronunciation of words; this view is further evidenced by the performance of the Bell Labs system, which relies on precisely this type of information. We agree with Sproat (1998) that it is unrealistic to expect optimal results from a system that has no access to this type of information or is trained on data that are insufficient for the task.

References

Harald R. Baayen, Richard Piepenbrock, and H. van Rijn. 1995. The CELEX lexical database (Dutch, English, German). (Release 2) [CD-ROM]. Philadelphia, PA: Linguistic Data Consortium, Univ. Pennsylvania.

Leonard E. Baum, Ted Petrie, George Soules, and Norman Weiss. 1970. A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. The Annals of Mathematical Statistics, 41(1):164-171.

Gosse Bouma. 2000.
A finite state and data-oriented method for grapheme to phoneme conversion. In Proc. 1st Conf. North American Chapter of the ACL (NAACL), Seattle, WA.

Robert I. Damper, Y. Marchand, M. J. Adamson, and Kjell Gustafson. 1999. Evaluating the pronunciation component of text-to-speech systems for English: a performance comparison of different approaches. Computer Speech and Language, 13:155-176.

A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(B):1-38.

Bernd Möbius. 1998. Word and syllable models for German text-to-speech synthesis. In Proc. 3rd ESCA Workshop on Speech Synthesis (Jenolan Caves).

Bernd Möbius. 1999. The Bell Labs German text-to-speech system. Computer Speech and Language, 13.

Gregor Möhler. IMS Festival. [http://www.ims.uni-stuttgart.de/phonetik/synthesis/index.html].

Karin Müller, Bernd Möbius, and Detlef Prescher. 2000. Inducing probabilistic syllable classes using multivariate clustering. In AIMS Report, IMS, Univ. Stuttgart.

Karin Müller. 2000. PCFGs for syllabification and g2p conversion. In AIMS Report, IMS, Univ. Stuttgart.

Mats Rooth, Stefan Riezler, Detlef Prescher, Glenn Carroll, and Franz Beil. 1999. Inducing a semantically annotated lexicon via EM-based clustering. In Proc. 37th Annual Meeting of the ACL, College Park, MD.

Richard Sproat, editor. 1998. Multilingual Text-to-Speech Synthesis: The Bell Labs Approach. Kluwer Academic, Dordrecht.
2000
Spoken Language Technology: Where Do We Go From Here?

Roger K. Moore
20/20 Speech Ltd
Malvern, UK

Recent years have seen dramatic developments in the capabilities and applications of spoken language technology. From a few niche applications for a range of expensive solutions, the field has developed to the point where keenly priced products have swept the awards at consumer electronics shows. Speech recognisers have reached the high-street store, and a significant proportion of the developed world's population has now been exposed to the possibility of controlling one's computer, or creating a document, by voice. This apparent progress in spoken language technology has been fuelled by a number of developments: the relentless increase in desktop computing power, the introduction of statistical modelling techniques, the availability of vast quantities of recorded speech material, and the institution of public system evaluations. However, our understanding of the fundamental patterning in speech has progressed at a much slower pace, not least in the area of its high-level linguistic properties. Spoken language understanding continues to be an elusive goal, and the prosodic linkage between acoustic and linguistic patterning is still something of a mystery. This talk will illuminate these issues and will conclude with an analysis of the options for future spoken language R&D.
Modeling Local Context for Pitch Accent Prediction

Shimei Pan
Department of Computer Science
Columbia University
New York, NY, 10027, USA
[email protected]

Julia Hirschberg
AT&T Labs-Research
Florham Park, NJ, 07932-0971, USA
[email protected]

Abstract

Pitch accent placement is a major topic in intonational phonology research and its application to speech synthesis. What factors influence whether or not a word is made intonationally prominent is an open question. In this paper, we investigate how one aspect of a word's local context, its collocation with neighboring words, influences whether it is accented or not. Results of experiments on two transcribed speech corpora in a medical domain show that such collocation information is a useful predictor of pitch accent placement.

1 Introduction

In English, speakers make some words more intonationally prominent than others. These words are said to be accented or to bear pitch accents. Accented words are typically louder and longer than their unaccented counterparts, and their stressable syllable is usually aligned with an excursion in the fundamental frequency. This excursion will differ in shape according to the type of pitch accent. Pitch accent type, in turn, influences listeners' interpretation of the accented word or its larger syntactic constituent. Previous research has associated pitch accent with variation in various types of information status, including the given/new distinction, focus, and contrastiveness, inter alia. Assigning pitch accent in speech generation systems which employ speech synthesizers for output is thus critical to system performance: not only must one convey meaning naturally, as humans would, but one must avoid conveying the mis-information which reliance on the synthesizers' defaults may result in.
The speech generation work discussed here is part of a larger effort in developing an intelligent multimedia presentation generation system called MAGIC (Medical Abstract Generation for Intensive Care) (Dalal et al., 1996). In MAGIC, given a patient's medical record stored in Columbia Presbyterian Medical Center (CPMC)'s on-line database system, the system automatically generates a post-operative status report for a patient who has just undergone bypass surgery. There are two media-specific generators in MAGIC: a graphics generator which automatically produces graphical presentations from database entities, and a spoken language generator which automatically produces coherent spoken language presentations from these entities. The graphical and the speech generators communicate with each other on the fly to ensure that the final multimedia output is synchronized. In order to produce natural and coherent speech output, MAGIC's spoken language generator models a collection of speech features, such as accenting and intonational phrasing, which are critical to the naturalness and intelligibility of output speech. In order to assign these features accurately, the system needs to identify useful correlates of accent and phrase boundary location to use as predictors. This work represents part of our effort in identifying useful predictors for pitch accent placement. Pitch accent placement has long been a research focus for scientists working on phonology, speech analysis and synthesis (Bolinger, 1989; Ladd, 1996). In general, syntactic features are the most widely used features in pitch accent prediction. For example, part-of-speech is traditionally the most useful single pitch accent predictor (Hirschberg, 1993).
Function words, such as prepositions and articles, are less likely to be accented, while content words, such as nouns and adjectives, are more likely to be accented. Other linguistic features, such as inferred given/new status (Hirschberg, 1993; Brown, 1983), contrastiveness (Bolinger, 1961), and discourse structure (Nakatani, 1998), have also been examined to explain accent assignment in large speech corpora. In a previous study (Pan and McKeown, 1998; Pan and McKeown, 1999), we investigated how features such as deep syntactic/semantic structure and word informativeness correlate with accent placement. In this paper, we focus on how local context influences accent patterns. More specifically, we investigate how word collocation influences whether nouns are accented or not. Determining which nouns are accented and which are not is challenging, since part-of-speech information cannot help here. So, other accent predictors must be found. There are some advantages in looking only at one word class. We eliminate the interaction between part-of-speech and collocation, so that the influence of collocation is easier to identify. It also seems likely that collocation may have a greater impact on content words, like nouns, than on function words, like prepositions. Previous researchers have speculated that word collocation affects stress assignment of noun phrases in English. For example, James Marchand (1993) notes how familiar collocations change their stress: witness the American pronunciation of "Little House" [in the television series Little House on the Prairie], where stress used to be on HOUSE, but now, since the series is so familiar, is placed on the LITTLE. That is, for collocated words, stress shifts to the left element of the compound.
However, there are numerous counter-examples: consider apple PIE, which retains a right stress pattern despite the collocation. So, the extent to which collocational status affects accent patterns is still unclear. Despite some preliminary investigation (Liberman and Sproat, 1992), word collocation information has not, to our knowledge, been successfully used to model pitch accent assignment; nor has it been incorporated into any existing speech synthesis systems. In this paper, we empirically verify the usefulness of word collocation for accent prediction. In Section 2, we describe our annotated speech corpora. In Section 3, we present a description of the collocation measures we investigated. Sections 4 to 7 describe our analyses and machine learning experiments in which we attempt to predict accent location. In Section 8 we sum up our results and discuss plans for further research.

2 Speech Corpora

From the medical domain described in Section 1, we collected two speech corpora and one text corpus for pitch accent modeling. The speech corpora consist of one multi-speaker spontaneous corpus, containing twenty segments and totaling fifty minutes, and one read corpus of five segments, read by a single speaker and totaling eleven minutes of speech. The text corpus consists of 3.5 million words from 7375 discharge summaries of patients who had undergone surgery. The speech corpora only cover cardiac patients, while the text corpus covers a larger group of patients, the majority of whom have also undergone cardiac surgery. The speech corpora were first transcribed orthographically and then intonationally, using the ToBI convention for prosodic labeling of standard American English (Silverman et al., 1992).
For this study, we used only binary accented/deaccented decisions derived from the ToBI tonal tier, in which location and type of pitch accent is marked. After ToBI labeling, each word in the corpora was tagged with part-of-speech, from a nine-element set: noun, verb, adjective, adverb, article, conjunction, pronoun, cardinal, and preposition. The spontaneous corpus was tagged by hand and the read corpus tagged automatically. As noted above, we focus here on predicting whether nouns are accented or not.

3 Collocation Measures

We used three measures of word collocation to examine the relationship between collocation and accent placement: word bigram predictability, mutual information, and the Dice coefficient. While word predictability is not typically used to measure collocation, there is some correlation between word collocation and predictability. For example, if two words are collocated, then it will be easy to predict the second word from the first. Similarly, if one word is highly predictable given another word, then there is a higher possibility that these two words are collocated. Mutual information (Fano, 1961) and the Dice coefficient (Dice, 1945) are two standard measures of collocation. In general, mutual information measures uncertainty reduction or departure from independence. The Dice coefficient is a collocation measure widely used in information retrieval. In the following, we give a more detailed definition of each. Statistically, bigram word predictability is defined as the log conditional probability of word w_i, given the previous word w_{i-1}:

    Pred(w_i) = log(Prob(w_i | w_{i-1}))

Bigram predictability directly measures the likelihood of seeing one word, given the occurrence of the previous word. Bigram predictability has two forms: absolute and relative. Absolute predictability is the value directly computed from the formula.
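The definition above can be sketched directly. This is a toy illustration, not the paper's pipeline: the eight-word corpus is made up, and it uses unsmoothed maximum-likelihood counts, whereas the paper estimates bigrams over 3.5 million words with Good-Turing discounting.

```python
import math
from collections import Counter

tokens = "the patient was stable the patient was transfused".split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))

def pred(prev, word):
    # Absolute bigram predictability: log Prob(word | prev), MLE estimate.
    return math.log(bigrams[(prev, word)] / unigrams[prev])

def relative_pred(constituent_scores):
    # Relative predictability: rank within a constituent; 1 = least predictable.
    # Tied scores share the lowest rank.
    order = sorted(constituent_scores)
    return [order.index(s) + 1 for s in constituent_scores]

print(pred("the", "patient"))        # log(2/2) = 0.0
print(relative_pred([-4.0, -3.0, -2.0]))  # [1, 2, 3]
```

The second call reproduces the ranking convention used in the worked example that follows in the text.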
For example, given four adjacent words w_{i-1}, w_i, w_{i+1} and w_{i+2}, if we assume Prob(w_i | w_{i-1}) = 0.0001, Prob(w_{i+1} | w_i) = 0.001, and Prob(w_{i+2} | w_{i+1}) = 0.01, the absolute bigram predictability will be -4, -3 and -2 for w_i, w_{i+1} and w_{i+2}. The relative predictability is defined as the rank of absolute predictability among words in a constituent. In the same example, the relative predictability will be 1, 2 and 3 for w_i, w_{i+1} and w_{i+2}, where 1 is associated with the word with the lowest absolute predictability. In general, the higher the rank, the higher the absolute predictability. Except in Section 7, all the predictability measures mentioned in this paper use the absolute form. We used our text corpus to compute bigram word predictability for our domain. When calculating the word bigram predictability, we first filtered uncommon words (words occurring 5 times or fewer in the corpus), then used the Good-Turing discount strategy to smooth the bigram counts. Finally, we calculated the log conditional probability of each word as the measure of its bigram predictability. Two measures of mutual information were used for word collocation: pointwise mutual information, which is defined as:

    I1(w_{i-1}, w_i) = log [ Pr(w_{i-1}, w_i) / (Pr(w_{i-1}) Pr(w_i)) ]

and average mutual information, which is defined as:

    I2(w_{i-1}, w_i) =   Pr(w_{i-1}, w_i)   log [ Pr(w_{i-1}, w_i)   / (Pr(w_{i-1}) Pr(w_i))  ]
                       + Pr(w_{i-1}, ¬w_i)  log [ Pr(w_{i-1}, ¬w_i)  / (Pr(w_{i-1}) Pr(¬w_i)) ]
                       + Pr(¬w_{i-1}, w_i)  log [ Pr(¬w_{i-1}, w_i)  / (Pr(¬w_{i-1}) Pr(w_i)) ]
                       + Pr(¬w_{i-1}, ¬w_i) log [ Pr(¬w_{i-1}, ¬w_i) / (Pr(¬w_{i-1}) Pr(¬w_i)) ]

The same text corpus was used to compute both mutual information measures. Only word pairs with bigram frequency greater than five were retained. The Dice coefficient is defined as:

    Dice(w_{i-1}, w_i) = 2 * Pr(w_{i-1}, w_i) / (Pr(w_{i-1}) + Pr(w_i))

Here, we also use a cutoff threshold of five to filter uncommon bigrams.
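The three measures can be computed from a 2x2 contingency table of bigram counts. The counts below are invented for illustration (a pair that always co-occurs left-to-right but whose second word also appears elsewhere, loosely in the spirit of the scarlet fever discussion that follows); they are not corpus figures.

```python
import math

# n11 = count(w1 followed by w2), n1x = count(w1), nx1 = count(w2),
# n = total number of bigram positions.
def pointwise_mi(n11, n1x, nx1, n):
    return math.log((n11 / n) / ((n1x / n) * (nx1 / n)))

def average_mi(n11, n1x, nx1, n):
    # Sum P(x,y) * log(P(x,y) / (P(x)P(y))) over the four presence/absence cells.
    total = 0.0
    cells = [(1, 1, n11), (1, 0, n1x - n11), (0, 1, nx1 - n11),
             (0, 0, n - n1x - nx1 + n11)]
    for a, b, nab in cells:
        if nab == 0:
            continue  # zero cells contribute nothing
        px = n1x / n if a else 1 - n1x / n
        py = nx1 / n if b else 1 - nx1 / n
        total += (nab / n) * math.log((nab / n) / (px * py))
    return total

def dice(n11, n1x, nx1, n):
    return 2 * (n11 / n) / (n1x / n + nx1 / n)

print(pointwise_mi(6, 6, 20, 1000))  # log(50): strong association
print(dice(6, 6, 20, 1000))          # 12/26, about 0.46
```

Note how pointwise MI rewards the rarity of both words, while Dice depends only on the overlap relative to the two marginals, anticipating the contrast drawn in the next section.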
Although all these measures are correlated, one measure can score word pairs quite differently from another. Table 1 shows the top ten collocations for each metric.

    Pred                          I1                       I2                  Dice
    chief complaint               polymyalgia rheumatica   The patient         greenfield filter
    cerebrospinal fluid           hemiside stepper         present illness     Guillain Barre
    folic acid                    Pepto Bismol             hospital course     Viet Nam
    periprocedural complications  Glen Cove                po                  Neo Synephrine
    normoactive bowel             hydrogen peroxide        physical exam       polymyalgia rheumatica
    uric acid                     Viet Nam                 id                  hemiside stepper
    postpericardiotomy syndrome   Neo Synephrine           coronary artery     Pepto Bismol
    Staten Island                 otitis media             postoperative day   Glen Cove
    scarlet fever                 Lo Gerfo                 saphenous vein      present illness
    pericardiotomy syndrome       Chlor Trimeton           medical history     chief complaint

    Table 1: Top Ten Most Collocated Words for Each Measure

In the predictability top ten list, we have pairs like scarlet fever, where fever is very predictable from scarlet (in our corpus, scarlet is always followed by fever); thus, it ranks highest in the predictability list. Since scarlet can be difficult to predict from fever, these types of pairs will not receive a very high score using mutual information (in the top 5% of the I1 sorted list and in the top 20% of the I2 list) or the Dice coefficient (top 22%). From this table, it is also quite clear that I1 tends to rank uncommon words high. All the words in the top ten I1 list have a frequency less than or equal to seven (we filter all pairs occurring fewer than six times). Of the different metrics, only bigram predictability is a unidirectional measure. It captures how the appearance of one word affects the appearance of the following word. In contrast, the other measures are all bidirectional, making no distinction between the relative positions of the elements of a collocated pair.
Among the bidirectional measures, pointwise mutual information is sensitive to the marginal probabilities Pr(w_{i-1}) and Pr(w_i). It tends to give higher values as these probabilities decrease, independently of the distribution of their co-occurrence. The Dice coefficient, however, is not sensitive to marginal probability. It computes conditional probabilities which are equally weighted in both directions. Average mutual information measures the reduction in uncertainty of one word, given another, and is totally symmetric. Since I2(w_{i-1}, w_i) = I2(w_i, w_{i-1}), the uncertainty reduction of the first word, given the second word, is equal to the uncertainty reduction of the second word, given the first word. Furthermore, because I2(w_{i-1}, w_i) = I2(¬w_{i-1}, ¬w_i), the uncertainty reduction of one word, given another, is also equal to the uncertainty reduction of failing to see one word, having failed to see the other. Since there is considerable evidence that prior discourse context, such as previous mention of a word, affects pitch accent decisions, it is possible that symmetric measures, such as mutual information and the Dice coefficient, may not model accent placement as well as asymmetric measures, such as bigram predictability. Also, the bias of pointwise mutual information toward uncommon words can affect its ability to model accent assignment, since, in general, uncommon words are more likely to be accented (Pan and McKeown, 1999). Since this metric disproportionately raises the mutual information for uncommon words, making them more predictable than their appearance in the corpus warrants, it may predict that uncommon words are more likely to be deaccented than they really are.
4 Statistical Analyses

In order to determine whether word collocation is useful for pitch accent prediction, we first employed Spearman's rank correlation test (Conover, 1980). In this experiment, we employed a unigram predictability-based baseline model. The unigram predictability of a word is defined as the log probability of a word in the text corpus. The maximum likelihood estimate of this measure is:

    log [ Freq(w_i) / Σ_i Freq(w_i) ]

The reason for choosing this as the baseline model is not only that it is context independent, but also that it is effective. In a previous study (Pan and McKeown, 1999), we showed that when this feature is used, it is as powerful a predictor as part-of-speech. When jointly used with part-of-speech information, the combined model performs significantly better than each individual model. When tested on a similar medical corpus, this combined model also outperforms a comprehensive pitch accent model employed by the Bell Labs' TTS system (Sproat et al., 1992; Hirschberg, 1993; Sproat, 1998), in which discourse information, such as given/new, syntactic information, such as part-of-speech, and surface information, such as word distance, are incorporated. Since unigram predictability is context independent, comparing other predictors to this baseline model demonstrates the impact of context, measured by word collocation, on pitch accent assignment. Table 2 shows that for our read speech corpus, unigram predictability, bigram predictability and mutual information are all significantly correlated (p < 0.001) with pitch accent decision.(1) However, the Dice coefficient shows only a trend toward correlation (p < 0.07). In addition, both bigram predictability and (pointwise) mutual information show a slightly stronger correlation with pitch accent than the baseline.
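The baseline itself is a one-liner over corpus frequencies. A minimal sketch with invented counts (the paper estimates this over its 3.5-million-word text corpus, not four words):

```python
import math
from collections import Counter

def unigram_pred(freq):
    # Unigram predictability: log relative frequency of each word.
    total = sum(freq.values())
    return {w: math.log(c / total) for w, c in freq.items()}

freq = Counter({"the": 100, "patient": 50, "fever": 10, "scarlet": 2})
scores = unigram_pred(freq)

# Rarer words get lower (more negative) predictability; per the paper,
# such words are in general more likely to be accented.
print(scores["scarlet"] < scores["patient"] < scores["the"])  # True
```

Because this score ignores the preceding word entirely, any gain the bigram measures show over it can be attributed to local context.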
When we conducted a similar test on the spontaneous corpus, we found that all but the baseline model are significantly correlated with pitch accent placement. Since all three models incorporate a context word while the baseline model does not, these results suggest the usefulness of context in accent prediction. Overall, among the different measures of collocation, bigram predictability explains the largest amount of variation in accent status for both corpora. We conducted a similar test using trigram predictability, where two context words, instead of one, were used to predict the current word. The results are slightly worse than for bigram predictability (for the read corpus r = 0.167, p < 0.0001; for the spontaneous corpus r = 0.355, p < 0.0001). The failure of the trigram model to improve over the bigram model may be due to sparse data. Thus, in the following analysis, we focus on bigram predictability. In order to further verify the effectiveness of word predictability in accent prediction, we first show some examples from our speech corpora. Then we describe how machine learning helps to derive pitch accent prediction models using this feature. Finally, we show that both absolute and relative predictability are useful for pitch accent prediction.

(1) Since pointwise mutual information performed consistently better than average mutual information in our experiment, we present results only for the former.

5 Word Predictability and Accent

In general, nouns, especially head nouns, are very likely to be accented. However, certain nouns consistently do not get accented. For example, Table 3 shows some collocations containing the word cell in our speech corpus.
For each context, we list the collocated pair, its most frequent accent pattern in our corpus (upper case indicates that the word was accented and lower case indicates that it was deaccented), its bigram predictability (the larger the number, the more predictable the word), and the frequency of this accent pattern, as well as the total occurrences of the bigram in the corpus.

    Word Pair      Pred(cell)   Freq
    [of] CELL      -3.11        7/7
    [RED] CELL     -1.119       2/2
    [PACKED] cell  -0.5759      4/6
    [BLOOD] cell   -0.067       2/2

    Table 3: cell Collocations

In the first example, cell in [of] CELL is very unpredictable from the occurrence of of and always receives a pitch accent. In [RED] CELL, [PACKED] cell, and [BLOOD] cell, cell has the same semantic meaning but different accent patterns: cell in [PACKED] cell and [BLOOD] cell is more predictable and deaccented, while in [RED] CELL it is less predictable and is accented. These examples show the influence of context and its usefulness for bigram predictability. Other predictable nouns, such as saver in CELL saver, usually are not accented even when they function as head nouns. Saver is deaccented in ten of the eleven instances in our speech corpus. Its bigram score is -1.5517, which is much higher than that of CELL (-4.6394 to -3.1083, depending upon context). Without collocation information, a typical accent prediction system is likely to accent saver, which would be inappropriate in this domain.

6 Accent Prediction Models

Both the correlation test results and direct observations provide some evidence of the usefulness of word predictability. But we still need to demonstrate that we can successfully use this feature in automatic accent prediction.
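As a toy illustration of how the Table 3 scores separate accented from deaccented tokens, a single hand-picked threshold on bigram predictability suffices. The rule form and the cutoff of -1.0 are our own choices for this sketch, not a model induced from the corpus.

```python
# Bigram predictability of "cell" in each context, with its majority accent
# pattern, as listed in Table 3.
TABLE3 = {
    "[of] CELL":     (-3.11,   "accented"),
    "[RED] CELL":    (-1.119,  "accented"),
    "[PACKED] cell": (-0.5759, "deaccented"),
    "[BLOOD] cell":  (-0.067,  "deaccented"),
}

def predict(pred_score, threshold=-1.0):
    # More predictable than the threshold -> predict deaccented.
    return "deaccented" if pred_score > threshold else "accented"

for ctx, (score, gold) in TABLE3.items():
    print(ctx, predict(score), predict(score) == gold)
```

On these four contexts the threshold rule matches every majority pattern, which is the intuition the rule-induction experiments below put to a proper test.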
    Corpus                        Read                      Spontaneous
                                  r          p-value        r          p-value
    Baseline (Unigram)            r = 0.166  p = 0.0002     r = 0.02   p = 0.39
    Bigram Predictability         r = 0.236  p < 0.0001     r = 0.36   p < 0.0001
    Pointwise Mutual Information  r = 0.185  p < 0.0001     r = 0.177  p < 0.0001
    Dice Coefficient              r = 0.079  p = 0.066      r = 0.094  p < 0.0001

    Table 2: Correlation of Different Collocation Measures with Accent Decision

In order to achieve this, we used machine learning techniques to automatically build accent prediction models using bigram word predictability scores. We used RIPPER (Cohen, 1995b) to explore the relations between predictability and accent placement. RIPPER is a classification-based rule induction system. From annotated examples, it derives a set of ordered if-then rules, describing how input features can be used to predict an output feature. In order to avoid overfitting, we use 5-fold cross validation. The training data include all the nouns in the speech corpora. The independent variables used to predict accent status are the unigram and bigram predictability measures, and the dependent variable is pitch accent status. We used a majority-based predictability model as our baseline (i.e. predict accented). In the combined model, both unigram and bigram predictability are used together for accent prediction. From the results in Table 4, we see that the bigram model consistently outperforms the unigram model, and the combined model achieves the best performance. To evaluate the significance of the improvements achieved by incorporating a context word, we use the standard error produced by RIPPER. Two results are statistically significant when the results plus or minus twice the standard error do not overlap (Cohen, 1995a). As shown in Table 4, for the read corpus, all the models except the unigram model (i.e., those with bigram predictability) performed significantly better than the baseline model.
However, the bigram model and the combined model failed to improve significantly over the unigram model. This may result from too small a corpus. For the spontaneous corpus, the unigram, bigram and combined models all achieved significant improvement over the baseline. The bigram model also performed significantly better than the unigram model. The combined model had the best performance; it also achieved significant improvement over the unigram model. The improvement of the combined model over both the unigram and bigram models may be due to the fact that some accent patterns that are not captured by one are indeed captured by the other. For example, accent patterns for street names have been extensively discussed in the literature (Ladd, 1996). Street in phrases like FIFTH street is typically deaccented, while avenue (e.g. Fifth AVENUE) is accented. While it seems likely that the conditional probability Pr(Street | Fifth) is no higher than Pr(Avenue | Fifth), the unigram probability Pr(street) is probably higher than Pr(avenue).(2) So, incorporating both predictability measures may tease apart these and similar cases.

7 Relative Predictability

In the previous analysis, we showed the effectiveness of absolute word predictability. We now consider whether relative predictability is correlated with a larger constituent's accent pattern. The following analysis focuses on accent patterns of non-trivial base NPs.(3) For this study we labeled base NPs by hand for the corpora described in Section 2. For each base NP, we calculated which word is the most predictable and which is the least. We want to see, when comparing with its neighboring

(2) For example, in a 7.5M-word general news corpus (from CNN and Reuters), street occurs 2115 times and avenue just 194 times.
Therefore, the unigram predictability of street is higher than that of avenue. The most common bigram with street is Wall Street, which occurs 116 times, and the most common bigram with avenue is Pennsylvania Avenue, which occurs 97 times. In this domain, the bigram predictability for street in Fifth Street is extremely low because this combination never occurred, while that for avenue in Fifth Avenue is -3.0995, which is the third most predictable bigram with avenue as the second word.

³ Non-recursive noun phrases containing at least two elements.

Corpus        Predictability Model           Performance    Standard Error
Read          baseline model                 81.98%
              unigram model                  82.86%         ±0.93
              bigram predictability model    84.41%         ±1.10
              unigram+bigram model           85.03%         ±1.04
Spontaneous   baseline model                 70.03%
              unigram model                  72.22%         ±0.62
              bigram model                   74.46%         ±0.30
              unigram+bigram model           77.43%         ±0.51

Table 4: Ripper Results for Accent Status Prediction

Model     Predictability       Total   Accented   Not Accented   Accentability
unigram   Least Predictable    1206    877        329            72.72%
          Most Predictable     1198    485        713            40.48%
bigram    Least Predictable    1205    965        240            80.08%
          Most Predictable     1194    488        706            40.87%

Table 5: Relative Predictability and Accent Status

words, whether the most predictable word is more likely to be deaccented. As shown in Table 5, the "Total" column represents the total number of most (or least) predictable words in all base NPs⁴. The next two columns indicate how many of them are accented and deaccented. The last column is the percentage of words that are accented. Table 5 shows that the probability of accenting a most predictable word is between 40.48% and 45.96% and that of a least predictable word is between 72.72% and 80.08%. This result indicates that relative predictability is also a useful predictor for a word's accentability.
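Assuming that predictability is a log conditional probability (as the -3.0995 figure quoted above suggests), the computation of bigram predictability and of a base NP's most and least predictable words might be sketched as follows. The corpus counts and the unigram scoring of the NP-initial word are invented purely for illustration.

```python
import math
from collections import Counter

# Toy counts; a real model would be estimated from a large news corpus.
unigrams = Counter({"fifth": 300, "street": 2115, "avenue": 194,
                    "pennsylvania": 120, "wall": 150})
bigrams = Counter({("wall", "street"): 116, ("pennsylvania", "avenue"): 97})

def bigram_predictability(prev, word):
    # log2 Pr(word | prev); -inf when the bigram was never seen
    c = bigrams[(prev, word)]
    return math.log2(c / unigrams[prev]) if c else float("-inf")

def extremes(base_np):
    """Most and least predictable word of a base NP.  The first word is
    scored by its unigram probability here, purely for illustration."""
    total = sum(unigrams.values())
    scores = {base_np[0]: math.log2(unigrams[base_np[0]] / total)}
    for prev, word in zip(base_np, base_np[1:]):
        scores[word] = bigram_predictability(prev, word)
    rank = sorted(base_np, key=scores.get)
    return rank[-1], rank[0]   # (most predictable, least predictable)

most, least = extremes(["pennsylvania", "avenue"])
```

Under these toy counts, "avenue" is the most predictable word of "Pennsylvania Avenue" (and so the candidate for deaccenting), while the unseen bigram "Fifth Street" scores -inf, mirroring the contrast discussed in footnote 2.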
8 Discussion

It is difficult to directly compare our results with previous accent prediction studies, to determine the general utility of bigram predictability in accent assignment, due to differences in domain and the scope of our task. For example, Hirschberg (1993) built a comprehensive accent prediction model using machine learning techniques for predicting accent status for all word classes for a text-to-speech system, employing part-of-speech, various types of information status inferred from the text, and a number of distance metrics, as well as a complex nominal predictor developed by Sproat (1992). An algorithm making use of these features achieved 76.5%-80% accent prediction accuracy for a broadcast news corpus, 85% for sentences from the ATIS corpus of spontaneous elicited speech, and 98.3% success on a corpus of laboratory read sentences. Liberman and Sproat's (1992) success in predicting accent patterns for complex nominals alone, using rules combining a number of features, achieved considerably higher success rates (91% correct, 5.4% acceptable, 3.6% unacceptable when rated by human subjects) for 500 complex nominals of 2 or more elements chosen from the AP Newswire. Our results, using bigram predictability alone, 77% for the spontaneous corpus and 85% for the read corpus, and using a different success estimate, while not as impressive as Liberman and Sproat's (1992), nonetheless demonstrate the utility of a relatively untested feature for this task. In this paper, we have investigated several collocation-based measures for pitch accent prediction. Our initial hypothesis was that word collocation affects pitch accent placement, and that the more predictable a word is in terms of its local lexical context, the more likely it is to be deaccented.

⁴ The total number of most predictable words is not equal to that of least predictable words due to ties.
In order to verify this claim, we estimated three collocation measures: word predictability, mutual information and the Dice coefficient. We then used statistical techniques to analyze the correlation between our different word collocation metrics and pitch accent assignment for nouns. Our results show that, of all the collocation measures we investigated, bigram word predictability has the strongest correlation with pitch accent assignment. Based on this finding, we built several pitch accent models, assessing the usefulness of unigram and bigram word predictability, as well as a combined model, in accent prediction. Our results show that the bigram model performs consistently better than the unigram model, which does not incorporate local context information. However, our combined model performs best of all, suggesting that both contextual and non-contextual features of a word are important in determining whether or not it should be accented. These results are particularly important for the development of future accent assignment algorithms for text-to-speech. For our continuing research, we will focus on two directions. The first is to combine our word predictability feature with other pitch accent predictors that have been previously used for automatic accent prediction. Features such as information status, grammatical function, and part-of-speech have also been shown to be important determinants of accent assignment. So, our final pitch accent model should include many other features. Second, we hope to test whether the utility of bigram predictability can be generalized across different domains. For this purpose, we have collected an annotated AP news speech corpus and an AP news text corpus, and we will carry out a similar experiment in this domain.

9 Acknowledgments

Thanks to C. Jin, K. McKeown, R. Barzilay, J. Shaw, N. Elhadad, M.
Kan, D. Jordan, and anonymous reviewers for the help on data preparation and useful comments. This research is supported in part by the NSF Grant IRI 9528998, the NLM Grant R01 LM06593-01 and the Columbia University Center for Advanced Technology in High Performance Computing and Communications in Healthcare.

References

D. Bolinger. 1961. Contrastive accent and contrastive stress. Language, 37:83-96.
D. Bolinger. 1989. Intonation and Its Uses. Stanford University Press.
G. Brown. 1983. Prosodic structure and the given/new distinction. In A. Cutler and D.R. Ladd, ed., Prosody: Models and Measurements, pages 67-78. Springer-Verlag, Berlin.
P. Cohen. 1995a. Empirical Methods for Artificial Intelligence. MIT Press, Cambridge, MA.
W. Cohen. 1995b. Fast effective rule induction. In Proc. of the 12th International Conference on Machine Learning.
W. J. Conover. 1980. Practical Nonparametric Statistics. Wiley, New York, 2nd edition.
M. Dalal, S. Feiner, K. McKeown, S. Pan, M. Zhou, T. Hoellerer, J. Shaw, Y. Feng, and J. Fromer. 1996. Negotiation for automated generation of temporal multimedia presentations. In Proc. of ACM Multimedia 96, pages 55-64.
Lee R. Dice. 1945. Measures of the amount of ecologic association between species. Journal of Ecology, 26:297-302.
Robert M. Fano. 1961. Transmission of Information: A Statistical Theory of Communications. MIT Press, Cambridge, MA.
J. Hirschberg. 1993. Pitch accent in context: predicting intonational prominence from text. Artificial Intelligence, 63:305-340.
D. Robert Ladd. 1996. Intonational Phonology. Cambridge University Press, Cambridge.
M. Liberman and R. Sproat. 1992. The stress and structure of modified noun phrases in English. In I. Sag, ed., Lexical Matters, pages 131-182. University of Chicago Press.
J. Marchand. 1993. Message posted on HUMANIST mailing list, April.
C. Nakatani. 1998. Constituent-based accent prediction. In Proc. of COLING/ACL'98, pages 939-945, Montreal, Canada.
S. Pan and K. McKeown. 1998. Learning intonation rules for concept to speech generation. In Proc. of COLING/ACL'98, Montreal, Canada.
S. Pan and K. McKeown. 1999. Word informativeness and automatic pitch accent modeling. In Proc. of the Joint SIGDAT Conference on EMNLP and VLC, pages 148-157.
K. Silverman, M. Beckman, J. Pitrelli, M. Ostendorf, C. Wightman, P. Price, J. Pierrehumbert, and J. Hirschberg. 1992. ToBI: a standard for labeling English prosody. In Proc. of ICSLP92.
R. Sproat, J. Hirschberg, and D. Yarowsky. 1992. A corpus-based synthesizer. In Proc. of ICSLP92, pages 563-566, Banff.
R. Sproat, ed. 1998. Multilingual Text-to-Speech Synthesis: The Bell Labs Approach. Kluwer.
2000
30
A New Statistical Approach to Chinese Pinyin Input

Zheng Chen
Microsoft Research China
No. 49 Zhichun Road, Haidian District, 100080, China
[email protected]

Kai-Fu Lee
Microsoft Research China
No. 49 Zhichun Road, Haidian District, 100080, China
[email protected]

Abstract

Chinese input is one of the key challenges for Chinese PC users. This paper proposes a statistical approach to Pinyin-based Chinese input. This approach uses a trigram-based language model and a statistically based segmentation. Also, to deal with real input, it includes a typing model which enables spelling correction in sentence-based Pinyin input, and a spelling model for English which enables modeless Pinyin input.

1. Introduction

The Chinese input method is one of the most difficult problems for Chinese PC users. There are two main categories of Chinese input method. One is the shape-based input method, such as "wu bi zi xing"; the other is Pinyin, or pronunciation-based input method, such as "Chinese CStar", "MSPY", etc. Because it is easy to learn and use, Pinyin is the most popular Chinese input method. Over 97% of the users in China use Pinyin for input (Chen Yuan 1997). Although the Pinyin input method has many advantages, it also suffers from several problems, including Pinyin-to-characters conversion errors, user typing errors, and UI problems such as the need for two separate modes while typing Chinese and English. The Pinyin-based method automatically converts Pinyin to Chinese characters. But there are only about 406 syllables, and they correspond to over 6000 common Chinese characters. So it is very difficult for the system to select the correct corresponding Chinese characters automatically. A higher accuracy may be achieved using a sentence-based approach. A sentence-based input method chooses characters by using a language model based on context, so its accuracy is higher than that of a word-based input method.
In this paper, all the technology is based on the sentence-based input method, but it can easily be adapted to word-based input methods. In our approach we use a statistical language model to achieve very high accuracy. We designed a unified approach to Chinese statistical language modelling. This unified approach enhances trigram-based statistical language modelling with automatic, maximum-likelihood-based methods to segment words, select the lexicon, and filter the training data. Compared to the commercial product, our system is up to 50% lower in error rate at the same memory size, and about 76% better without memory limits at all (Jianfeng et al. 2000). However, sentence-based input methods also have their own problems. One is that the system assumes that users' input is perfect. In reality there are many typing errors in users' input, and typing errors will cause many system errors. Another problem is that in order to type both English and Chinese, the user has to switch between two modes. This is cumbersome for the user. In this paper, a new typing model is proposed to solve these problems. The system will accept correct typing, but also tolerate common typing errors. Furthermore, the typing model is combined with a probabilistic spelling model for English, which measures how likely it is that the input sequence is an English word. Both models can run in parallel, guided by a Chinese language model, to output the most likely sequence of Chinese and/or English characters. The organization of this paper is as follows. In the second section, we briefly discuss the Chinese language model which is used by the sentence-based input method. In the third section, we introduce a typing model to deal with typing errors made by the user. In the fourth section, we propose a spelling model for English, which discriminates between Pinyin and English. Finally, we give some conclusions.

2. Chinese Language Model

Pinyin input is the most popular form of text input in Chinese.
Basically, the user types a phonetic spelling with optional spaces, like:

  woshiyigezhongguoren

and the system converts this string into a string of Chinese characters, like: ( I am a Chinese ). A sentence-based input method chooses the probable Chinese words according to the context. In our system, a statistical language model is used to provide adequate information to predict the probabilities of hypothesized Chinese word sequences. In the conversion of Pinyin to Chinese characters, for the given Pinyin P, the goal is to find the most probable Chinese character sequence H, so as to maximize Pr(H|P). Using Bayes law, we have:

  H^ = argmax_H Pr(H|P) = argmax_H Pr(P|H) Pr(H) / Pr(P)    (2.1)

The problem is divided into two parts, the typing model Pr(P|H) and the language model Pr(H). Conceptually, all H's are enumerated, and the one that gives the largest Pr(H, P) is selected as the best Chinese character sequence. In practice, efficient methods such as Viterbi beam search (Kai-Fu Lee 1989; Chin-Hui Lee 1996) are used. The Chinese language model in equation 2.1, Pr(H), measures the a priori probability of a Chinese word sequence. Usually, it is determined by a statistical language model (SLM), such as a trigram LM. Pr(P|H), called the typing model, measures the probability that a Chinese word sequence H is typed as Pinyin P. Usually, H is a combination of Chinese words; it can be decomposed into w1, w2, ..., wn, where each wi can be a Chinese word or Chinese character. So the typing model can be rewritten as equation 2.2:

  Pr(P|H) ≈ ∏_{i=1..n} Pr(P_f(i) | w_i)    (2.2)

where P_f(i) is the Pinyin of w_i. The most widely used statistical language model is the so-called n-gram Markov model (Frederick 1997). Sometimes a bigram or trigram is used as the SLM. For English, the trigram is widely used. With a large training corpus the trigram also works well for Chinese. Many articles from newspapers and the web are collected for training.
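The decomposition in equations 2.1 and 2.2 can be sketched with a brute-force toy converter. A real system would use a trigram LM and Viterbi beam search; here a tiny unigram "language model" and an invented two-syllable lexicon stand in for both, and Pr(P|H) is 1 for an exact spelling, as in the traditional setting of equation 2.2.

```python
import itertools
import math

# Hypothetical mini lexicon: pinyin syllable -> {character: Pr(character)}.
# Characters and probabilities are invented for illustration.
lexicon = {"shi": {"是": 0.6, "市": 0.3, "十": 0.1},
           "wo": {"我": 0.9, "窝": 0.1}}

def convert(pinyin_syllables):
    """argmax_H Pr(P|H) Pr(H), with Pr(P|H) = 1 for an exact spelling."""
    best, best_lp = None, float("-inf")
    for chars in itertools.product(*(lexicon[s].items() for s in pinyin_syllables)):
        lp = sum(math.log(p) for _, p in chars)     # log Pr(H)
        if lp > best_lp:
            best, best_lp = "".join(c for c, _ in chars), lp
    return best

print(convert(["wo", "shi"]))   # prints 我是
```

Enumerating all candidate sequences is exponential in sentence length, which is exactly why the paper resorts to Viterbi beam search instead.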
And some new filtering methods are used to select a balanced corpus to build the trigram model. Finally, a powerful language model is obtained. In practice, perplexity (Kai-Fu Lee 1989; Frederick 1997) is used to evaluate the SLM, as in equation 2.3:

  PP = 2^( -(1/N) ∑_{i=1..N} log2 P(w_i | w_{i-1}) )    (2.3)

where N is the length of the testing data. The perplexity can be roughly interpreted as the geometric mean of the branching factor of the document when presented to the language model. Clearly, lower perplexities are better. We built a cross-domain general trigram word SLM for Chinese. We trained the system from 1.6 billion characters of training data. We evaluated the perplexity of this system, and found that across seven different domains, the average per-character perplexity was 34.4. We also evaluated the system for Pinyin-to-character conversion. Compared to the commercial product, our system is up to 50% lower in error rate at the same memory size, and about 76% better without memory limits at all (Jianfeng et al. 2000).

3. Spelling Correction

3.1 Typing Errors

The sentence-based approach converts Pinyin into Chinese words. But this approach assumes correct Pinyin input. Erroneous input will cause errors to propagate in the conversion. This problem is serious for Chinese users because:
1. Chinese users do not type Pinyin as frequently as American users type English.
2. There are many dialects in China. Many people do not speak the standard Mandarin Chinese dialect, which is the origin of Pinyin. For example, people in the southern area of China do not distinguish 'zh'-'z', 'sh'-'s', 'ch'-'c', 'ng'-'n', etc.
3. It is more difficult to check for errors while typing Pinyin for Chinese, because Pinyin typing is not WYSIWYG. Previous experiments showed that people usually do not check Pinyin for errors, but wait until the Chinese characters start to show up.
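The error types discussed in this and the following section (substitutions, insertions, deletions) can be illustrated with a small noisy-typing simulator. The keyboard-adjacency map is a tiny invented fragment, and the rates are the shares of error types quoted from (William 1983) reused as per-keystroke probabilities purely for illustration, not an estimate of real keystroke error rates.

```python
import random

# Hypothetical same-row keyboard neighbours; a real map covers all keys.
neighbours = {"z": "x", "s": "ad", "h": "gj", "n": "bm"}

def corrupt(pinyin, p_sub=0.43, p_ins=0.10, p_del=0.05, seed=0):
    """Apply substitution / insertion / deletion noise to a Pinyin string."""
    rng = random.Random(seed)            # seeded for reproducibility
    out = []
    for ch in pinyin:
        r = rng.random()
        if r < p_del:
            continue                                 # deletion error
        if r < p_del + p_sub and ch in neighbours:
            out.append(rng.choice(neighbours[ch]))   # substitution error
        else:
            out.append(ch)
        if rng.random() < p_ins:
            out.append(ch)                           # insertion error
    return "".join(out)
```

With all error probabilities set to zero the input passes through unchanged, which makes the simulator easy to sanity-check; with the defaults it produces corrupted strings like the "wisiyigezhonguoren" example discussed below.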
3.2 Spelling Correction

In traditional statistical Pinyin-to-characters conversion systems, Pr(P_f(i) | w_i), as mentioned in equation 2.2, is usually set to 1 if P_f(i) is an acceptable spelling of word w_i, and 0 if it is not. Thus, these systems rely exclusively on the language model to carry out the conversion, and have no tolerance for any variability in Pinyin input. Some systems have a "southern confused pronunciation" feature to deal with this problem. But this can only address a small fraction of typing errors because it is not data-driven (learned from real typing errors). Our solution trains the probability Pr(P_f(i) | w_i) from a real corpus. There are many ways to build typing models. In theory, we could train all possible Pr(P_f(i) | w_i), but there are too many parameters to train. In order to reduce the number of parameters, we consider only single-character words and map all characters with equivalent pronunciation into a single syllable. There are about 406 syllables in Chinese, so this is essentially training Pr(Pinyin string | syllable), and then mapping each character to its corresponding syllable. According to statistical data from psychology (William 1983), the most frequent errors made by users can be classified into the following types:
1. Substitution errors: the user types one key instead of another key. This error is mainly caused by the layout of the keyboard: the correct character is replaced by a character immediately adjacent and in the same row. 43% of the typing errors are of this type. Substitutions of a neighbouring letter from the same column (column errors) accounted for 15%. And the substitution of the homologous (mirror-image) letter typed by the same finger in the same position but the wrong hand accounted for 10% of the errors overall (William 1983).
2. Insertion errors: the typist inserts some keys into the typed letter sequence.
One reason for this error is the layout of the keyboard. Different dialects can also result in insertion errors.
3. Deletion errors: some keys are omitted while typing.
4. Other typing errors: all errors except those mentioned above, for example transposition errors, i.e. the reversal of two adjacent letters.
We use models learned from psychology, but train the model parameters from real data, similarly to training an acoustic model for speech recognition (Kai-Fu Lee 1989). In speech recognition, each syllable can be represented as a hidden Markov model (HMM). The pronunciation sample of each syllable is mapped to a sequence of states in the HMM. Then the transition probabilities between states can be trained from real training data. Similarly, in Pinyin input each input key can be seen as a state; we can then align the correct input and the actual input to find the transition probability of each state. Finally, different HMMs can be used to model typists with different skill levels. In order to train all 406 syllables in Chinese, a lot of data are needed. We reduce this data requirement by tying the same letter in different syllables, or the same syllable, as one state. The number of states can then be reduced to 27 (26 different letters from 'a' to 'z', plus one to represent the unknown letter which appears in the typed letters). This model can be integrated into a Viterbi beam search that utilizes a trigram language model.

3.3 Experiments

The typing model is trained from real user input. We collected actual typing data from 100 users, with about 8 hours of typing data from each user. 90% of these data are used for training and the remaining 10% for testing. The character perplexity of the testing corpus is 66.69, and the word perplexity is 653.71. We first tested the baseline system without spelling correction.
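The letter-tying idea of Section 3.2 can be illustrated with a toy estimator of the letter-confusion ("typing model") probabilities. The alignment between intended and typed letters is assumed to be given here; real training would first align the correct and actual input, e.g. by edit distance, as in HMM training.

```python
from collections import Counter, defaultdict

def train_typing_model(pairs):
    """Estimate Pr(typed | intended) from aligned (intended, typed)
    letter pairs, tying letters across syllables as in Section 3.2."""
    counts = defaultdict(Counter)
    for intended, typed in pairs:
        counts[intended][typed] += 1
    return {i: {t: n / sum(c.values()) for t, n in c.items()}
            for i, c in counts.items()}

# Toy aligned data: 'z' typed correctly three times, once as neighbour 'x'.
model = train_typing_model([("z", "z"), ("z", "z"), ("z", "z"), ("z", "x")])
```

On the toy data this yields Pr(z|z) = 0.75 and Pr(x|z) = 0.25; with 27 tied states, the full table stays small enough to estimate from the 100-user corpus described in this section.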
There are two groups of input: one with perfect input (i.e., using correct Pinyin instead of the users' actual keystrokes); the other with actual input, which contains real typing errors. The error rate of Pinyin-to-Hanzi conversion is shown in Table 3.1.

                   Error Rate
  Perfect Input    6.82%
  Actual Input     20.84%

  Table 3.1: system without spelling correction

In the actual input data, approximately 4.6% of Chinese characters are typed incorrectly. This 4.6% error causes more errors through propagation: in the whole system, we found that it results in a tripling of the error rate, as Table 3.1 shows. This demonstrates that error tolerance is very important for typists using a sentence-based input method. For example, if the user types Pinyin like "wisiyigezhonguoren", a system without error tolerance will convert it into a wrong character string. Another experiment was carried out to validate the concept of adaptive spelling correction. The motivation of adaptive spelling correction is that we want to apply more correction to less skilled typists. This level of correction can be controlled by the "language model weight" (LM weight) (Frederick 1997; Bahl et al. 1980; X. Huang et al. 1993). The LM weight is applied as in equation 3.1:

  H^ = argmax_H Pr(H|P) = argmax_H Pr(P|H) Pr(H)^α    (3.1)

where α is the LM weight. Using the same data as in the last experiment, but applying the typing model and varying the LM weight, we obtained the results shown in Figure 3.1.

[Figure 3.1: effect of LM weight. Error rates of actual Pinyin input (roughly 13%-18%) and perfect Pinyin input (roughly 6%-11%) plotted against LM weight values from 0.3 to 1.1.]

As can be seen from Figure 3.1, different LM weights affect the system performance. For a fixed LM weight of 0.5, the error rate of conversion is reduced by approximately 30%. For example, the conversion of "wisiyigezhonguoren" is now correct. If we apply an adaptive LM weight depending on the typing skill of the user, we can obtain further error reduction. To verify this, we selected 3 users from the testing data, added one ideal user (whose input is assumed to contain no errors), and tested the error rate of the system with different LM weights. The result is shown in Table 3.2.

           α1        α2        α3        α dynamic
  User 0   6.85%     7.11%     7.77%     6.85%
  User 1   8.15%     8.23%     8.66%     8.15%
  User 2   13.90%    12.86%    12.91%    12.86%
  User 3   19.15%    18.19%    17.77%    17.77%
  Average  12.01%    11.6%     11.78%    10.16%

  Table 3.2: user adaptation

The average input error rates of Users 1, 2 and 3 are 0.77%, 4.41% and 5.73% respectively. As can be seen from Table 3.2, the best weight for each user is different. In a real system, a skilled typist could be assigned a lower LM weight. The skill of the typist can be determined by: 1. the number of modifications during typing; 2. the difficulty of the text typed. The distribution of typing time can also be estimated and applied to judge the skill of the typist.

4. Modeless Input

Another annoying UI problem of Pinyin input is the language mode switch. The mode switch is needed for typing English words in a Chinese document, and it is easy for users to forget to do this switch. In our work, a new spelling model is proposed to let the system automatically detect which word is Chinese and which word is English. We call this the modeless Pinyin input method. This is not as easy as it may seem, because many legal English words are also legal Pinyin strings. And because no spaces are typed between Chinese characters, or between Chinese and English words, we obtain even more ambiguities in the input. The way to solve this problem is analogous to speech recognition. Bayes rule is used to divide the objective function (equation 4.1) into two parts: one is the spelling model for English, the other is the Chinese language model, as shown in equation 4.2.
Goal:

  H^ = argmax_H Pr(H|P)    (4.1)

Bayes rule:

  H^ = argmax_H Pr(P|H) Pr(H) / Pr(P)    (4.2)

One common method is to consider the English word as a single category, called <English>. We then train it into our Chinese language model (trigram) by treating <English> like a single Chinese word. We also train an English spelling model, which could be a combination of:
1. A unigram language model trained on real English inserted in Chinese language texts. It can deal with many frequently used English words, but it cannot predict unseen English words.
2. An "English spelling model" of tri-syllable probabilities. This model should have non-zero probabilities for every 3-syllable sequence, but should also emit a higher probability for words that are likely to be English-like. This can be trained from real English words as well, and can deal with unseen English words.
This English spelling model should, in general, return very high probabilities for real English word strings, high probabilities for letter strings that look like English words, and low probabilities for non-English words. In the actual recognition, this English model runs in parallel with (and thus competes with) the Chinese spelling model. We will have the following situations:
1. If a sequence is clearly Pinyin, the Pinyin models will have a much higher score.
2. If a sequence is clearly English, the English models will have a much higher score.
3. If a sequence is ambiguous, the two models will both survive in the search until further context disambiguates.
4. If a sequence looks like neither Pinyin nor an English word, then the Pinyin model should be less tolerant than the English tri-syllable model, and the string is likely to remain as English, as it may be a proper name or an acronym (such as "IEEE").
During training, we choose some frequently used English syllables, including 26 upper-case letters, 26 lower-case letters, English word-begin, word-end and unknown, for the English syllable list.
Then the English words or Pinyin in the training corpus are segmented by these syllables. We trained the probability for every three syllables. Thus the syllable model can be applied in the search to measure how likely it is that the input sequence is an English word or a Chinese word. This probability can be combined with the Chinese language model to find the most probable Chinese and/or English words. Some experiments were conducted to test the modeless Pinyin input method. First, we told the system the boundary between English words and Chinese words, and tested the error rate of the system; second, we let the system automatically judge the boundary between English and Chinese words, and tested the error rate again. The result is shown in Table 4.1.

                                                 Total Error Rate   English Error Rate
  Perfect Separation                             4.19%              0%
  Mixed Language Search
  (Tri-Letter English Spelling Model)            4.28%              3.6%
  Mixed Language Search + Spelling Correction
  (Tri-Letter English Spelling Model)            4.31%              4.5%

  Table 4.1: Modeless Pinyin input method (only the 52 English letters in the English syllable list)

In our modeless approach, only 52 English letters are added to the English syllable list, and a tri-letter spelling model is trained on the corpus. When we let the system automatically judge the boundary between English and Chinese words, we found the error rate is approximately 3.6% (meaning the system makes some mistakes in judging the boundary). We also found that the spelling model for English can be run together with spelling correction, with only a small error increase. Another experiment was done with an increased English syllable list: 1000 frequently used English syllables were selected for the English syllable list, and we trained a tri-syllable model on the corpus. The result is shown in Table 4.2.
                                        Total Error Rate   English Error Rate
  Perfect Separation                    4.19%              0%
  Tri-Letter English Spelling Model     4.28%              3.6%
  Tri-Syllable English Spelling Model   4.26%              2.77%

  Table 4.2: Modeless Pinyin input method (1000 frequently used English syllables + 52 English letters + 1 unknown)

As can be seen from Table 4.2, adequately increasing the complexity of the spelling model helps the system a little.

5. Conclusion

This paper proposed a statistical approach to Pinyin input using a Chinese SLM. We obtained a conversion accuracy of 95%, which is 50% better than commercial systems. Furthermore, to make the system usable in the real world, we proposed the spelling model, which allows the user to enter Chinese and English without a language mode switch, and the typing model, which makes the system resistant to typing errors. Compared to the baseline system, our system achieves approximately 30% error reduction.

Acknowledgements

Our thanks to ChangNing Huang, JianYun Nie and Mingjing Li for their suggestions on this paper.

References

Chen Yuan. 1997. Chinese Language Processing. Shanghai Education Publishing Company.
Jianfeng Gao, Hai-Feng Wang, Mingjing Li, Kai-Fu Lee. 2000. A Unified Approach to Statistical Language Modeling for Chinese. IEEE, ICASSP 2000.
Kai-Fu Lee. 1989. Automatic Speech Recognition. Kluwer Academic Publishers.
Chin-Hui Lee, Frank K. Soong, Kuldip K. Paliwal. 1996. Automatic Speech and Speaker Recognition -- Advanced Topics. Kluwer Academic Publishers.
Frederick Jelinek. 1997. Statistical Methods for Speech Recognition. The MIT Press, Cambridge, Massachusetts.
William E. Cooper. 1983. Cognitive Aspects of Skilled Typewriting. Springer-Verlag New York Inc.
Bahl, L., Bakis, R., Jelinek, F., and Mercer, R. 1980. Language Model / Acoustic Channel Balance Mechanism. IBM Technical Disclosure Bulletin, vol. 23, pp. 3464-3465.
X. Huang, M. Belin, F. Alleva, and M. Hwang. 1993. Unified Stochastic Engine (USE) for Speech Recognition. ICASSP-93, vol. 2, pp. 636-639.
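A tri-letter English spelling model of the kind used in Section 4 might be sketched as follows. This is a hypothetical reimplementation: the word-boundary symbols, add-alpha smoothing, smoothing constants and tiny training vocabulary are all invented for illustration and are not the paper's actual training setup.

```python
import math
from collections import Counter

BOS, EOS = "<", ">"   # assumed word-begin / word-end symbols

def train_triletter(words):
    """Collect trigram and bigram letter counts with boundary symbols."""
    tri, bi = Counter(), Counter()
    for w in words:
        s = BOS * 2 + w.lower() + EOS
        for i in range(len(s) - 2):
            tri[s[i:i + 3]] += 1
            bi[s[i:i + 2]] += 1
    return tri, bi

def logscore(word, tri, bi, alpha=0.5, vocab=29):
    """Length-normalised smoothed log Pr(word) under the tri-letter
    model; higher means more English-like.  alpha is an add-alpha
    smoothing constant, vocab an assumed symbol-inventory size."""
    s = BOS * 2 + word.lower() + EOS
    lp = 0.0
    for i in range(len(s) - 2):
        lp += math.log((tri[s[i:i + 3]] + alpha) / (bi[s[i:i + 2]] + alpha * vocab))
    return lp / (len(word) + 1)

tri, bi = train_triletter(["the", "there", "then", "these", "them"])
# Under this toy model, 'ther' scores higher (more English-like) than 'zxq'.
```

Thanks to smoothing, every letter string gets a non-zero probability, so the model can compete in the search against the Pinyin syllable model even for unseen words and acronyms, which is the behaviour Section 4 requires.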
2000
31
)3 )3 Å .+$)?+: ;E= =8> $3:b}~)†}~…3…?'!X$80{y7!j12…)+*z)+0x#|[$dAdO$ hÇ! yp({?c)?9' :åq5)Ìܾ$)?æ|[.n4°6* 7'!864 Z3\]^3Q&LtKY-L I i VL I+K_Y6KMkwL×IŒ^L Y\5LtJ_Y 4/…+$89A; »EŸ;6µ »?:+,/c)? .3 9'7 0 z!z0_cš75xA!h Å !8)3A2! )34 Å ! )?A~!8)3+: ¿Ï!8d®"v! )3çÜš.3)3¥,n$&U$&)+*åp({+$)38)3 )3 Å .+$)?+: ;E= =8>sn: p(!8dA…3% d†)7'9-$)+*•}«*‚_.3)+079³ )È12‡ …)+*z)+0xä|[$&dAdO$ r $&'9c)? ¼ dH.3% $&7'6*sx¥$ p(!8)39'7$&c)?6*„p(!8)X7Í375‡'Ê3#|[$&dAdO$E6:†q5)„}~% $&) r !8%c8.n‹ «$)*Hx%ca $& )f[${+$&)3 46* 7! '9E4 TR B JKKQ D L×IŒ6Y B5ÌiC kloM` DtMi k[‘ ’“å” B R–EYV B—ÈB ITR BD JK_YY_L×IŒ B5°™ K — KšI+Q8KI+Jš› D5œ GEY6KQO`CRG&FfFHGRY 48…+$89 ;E¯ ¹Ÿm;6¯8>34¡C)3 a8'9c73¢ P* u !8)X73¢ 6$&%54}~.38.39'7E: ¿Ï!8d®"v! )3çÜš.3)3¥,n$&U$&)+*åp({+$)38)3 )3 Å .+$)?+: ;E= =8= $3:¡°)3ÕÔm0$&7' ! )‡5s+$&9'6* r $&'9c)?è¡°9' )3³}2)‡ )?! 7$&7'6*ª12…)+*z)+0xª©C.3% 9E:åq5) £ ! 9'7<|[ …3…+š'7 $&)+* r 7'#Ë2%  aX E4W6*zc7! 964[é ^´º\5L ´ÆL×IŒ^3G&´cK iB R D —3B RGê iB Q&LtKRš^XIzŒ6ë³Z?\]R^8–E\]^RLÇKR^IŒEë  I+G&´Æ›EY6K D Ø Ø8ì«à GEVRK_Y\ˆGŒ^IŒ½Q KšRO`~KY6Kš´ ´ºY6J_VzG  \ Ïí ^RMkÏL IŒ^XLY D \5L Y6JV3K ™ GE\'KIáî+KšRGREïKšL \5^IŒæÙš`k ™ î~‘ ’8’Û 4P…$ š9 ¹ »8gEŸ3¹   : ¼ )3  dO$Hp(! '…! $7c!8)n4 r ${+$?4n1260dÇ s6: ¿Ï! d€"v! )3çÜš.3)3¦,b$½$&)+*¥p({+$&)3 )?c)? Å .$)3: ;6=8= =&sn:~¡C)?ÆÔ0$7c!8)‡5s+$96* r $9' )3y¡°9' )3†}~)3)3!&‡ 7$&7'6*ž1~š…+)*)+0x#©°.3% 96:Ïq5) TR B JKKQ&L IŒEY Bˆ[ð \ÇV oÑG\]^RG´?k/G&IŒ^3GŒ&K2T(R B JK_YYL IŒ[T¾G JšL ñJ @ L FçZ+›F D —3B YL ^F Ø ’8’ ’òo2knT @ Z‘ ’ ’ 42…+$& 9<;6¯ ¹Ÿm;E¯ e34("WšƇ ‚ )34  !a šdHs+E: ¿P:º"~: ÜH:w,b$54W:ºp[:w,/.3)n4(p[: Ê(:W.?)n4$&)+* u :º:Ï.3)b: ;6=8= ¹?:}ó¿ô$& 8c)?‡'"$&9'6*ªÊb '975‡'Ë~* u $&'Ý8!a u !3*%C}2…3…3!X$ 0{ 7'!<}~.?7'! dO$70ÄÀ-! *-q'*š)X7ÕÔ3‡ 0$&7' ! )OhÇ! ~p({3 )39f)7)+09E:~q5) £ .3% c.?9C¿P:3¿Ï! . $)* £ !89'…3{ £ :C, $)?+4WE* 7'! 
964 lI\'Kš´ ´ÆL Œ KšI\PZ+›EY D \'KFÄY 6B RHTR B JK_YY_L×IŒ  RLtKšI\'G´vk/G&IŒ^3GŒ&K_Y½Ù5T(R BED JKKQ&L IŒEY B5 \tV3K Ø ’8’õ<lI\'KRšI+GE\5L B IG´ iB I  KšRKšI+JK B I iB F — ^ \'KšRCTR B JK_YYL IzŒ B5Äi VXL IKY6K[GI+Q  RLÇKI D \'G´(knGIŒ^3GEŒ KYÇÛ 4(…+$89#;6eŸz¹ »?4n¿W$&dA…+$?4wÊb% ! *?$34 1260dHsE:p({3 )39#,n$)? .+$& yp(!8dA…3.?7'H!30Շ 75x: ¿Ï! dƒ"~: Üj:[,n$&54 u :º:~.3)b4f:ºp[:[,.3)n4M$)+*ª"2:ºM: ¿C9'!8.n:ö;E= =8>3:²¡C9' )3ªx)7$807' 0$&%c% x u ! 7' a $&7'6* ¿W$89„ ) u $Ý !a u !z*z%OÀ-! *š dAš)X7$7c!8)n: q5) TR B JKKQL IŒEY B5„Ø ’ ’&“ælšI\ˆKšRI+G\]L B I+G&´ iB I  KR D KšI+JK B I i VXL×I+KY6K†lI EB RFHGE\5L B IÌTR B JK_YYL IzŒ÷ÙÇl iÏD i lTM‘ ’“_Û 4?…+$89~¹3;EgEŸ3¹ ¹ ¹?4X"v ‚ )3: u  70{3š%c% u $0_.3964±|[$80 2cd<4 u $&'x }2)3) u $&0 )3Ý ¨C 0ø 4ä$)*öš7ù$&%5: ;6= =& +: ¿C{3 r )3)n:M¿Ï'šs+$)?Ým¸¥$&)3)3!87$7 )3¬…?'6* 0$&7'¬$&'8.‡ dAš)X7b9'7'.+07.3'8:Ïq5) TR B JKKQ&L IŒEY B5WÑ@ T  Z — KKJ_V GIQHoPGE\5^RG´knGIzŒ^3GEŒ K<” B R–EYV B—: q5 !8}M: u š%]ú É 0.?Ým:(;6=8> >?:W™ K — KIQ KšI+J›†Z+›Iz\ˆGûê Î V3K D B Rš›fGI+QÑTRG J_\]LtJK :n7$7'~¡°)3 a 9' 75xP!&h  ¨òÜ! Ý r 9'9E4  ¨ÌÜ!8'Ý: u $&Ý ! 7!  $$!+: ;E= = »?:-p(.3')7y7$7'.?9A$)+*òÊ3.3ˆ‡ 7'{?Ï¿w)+*9b!h  $&7'.3$%3,n$&)3 .$  r !309'9 )3+:Ïq5) TR B JKKQL IŒEY B5(ü[œ°ý°ü Z‘ ’&þ 48…+$89W»3;_Ÿz» 4¿Ï! Ýx8!+: £ !8{3)  's! )3)? 4 2% $.39  7'764$)+*[pC$&'% £ : r ! % % $*4 6* 7! '9E:P;6=8= : `~KRšFjG&I½L IžNÑKG8Q D5™ RšL SKšIžTWVXRGEY6K Z3\5R^3J\5^RKU`¾RGFMFjG&R :  .3dHs[ XµÃc)<pC3,q[,60‡ 7'.?'  ! 7'š96:CpC3,q r .3s?%c 0$&7' ! )?964z7$)zht! *æ¡°)3Շ a š'9' 75x: pC$% r ! % % $*ò$)+*òq5a $&)æ3$&+:ª;6=8= : NÑKG Q D5™ RL×SKšI TWVRGEY6KÐZ3\5R^3J\5^RKå`CRG&FfFHGR : ¡°)3 a 9' 75x²!h p({3 0$& ! r 9'9E4mp({?0š$ !: £ $&)3 £ :n©°! s3 )39! )n:H;E= e ¯?:P12…+š)+*)+0_xß7'.+0_7'.39 $)*<¿w$)39ˆht! dy$&7' ! )O©C.3% 9E: k/G&IŒ^zGŒ&K 4Ï Xµ?¸º¹ g =Ÿ ¹ >8g3: 3| u ,W:[;6=8> µ3:vq5)ht!8'dy$&7' ! ) r '!30š9'9' )3†‡¿ÏÍ37[$)+* ˰ÿy0Px9'7dA9v‡ô7$)+*?$*†|[)3š$%  ø6* u $Ý.3… ,n$&)3 .$ ö]3| u ,n·:P|Ñ)h¸Cq5)7')+$&7' ! )$%WË2'X$‡ )3 ø6$&7' ! 
)†ht!8¾7$)+*?$*zcøE$7' !8)n:nq'?Ë«>8> =8e3: z7$)?%cšxß7$! 9'7$3:³;E= >8>3:¬Î V3K i GEY6K 6B RHknKûLtJGEY6K : r c)7 r .3s3%  9'{?'9E4m,/! )+*z! )n: ,/.+0 )P¿Ï9)3]‹ '8:;E= g =?:´ KFHKIz\tY¾Q8KvY›I\ˆGû&K¾Y\]R^3J D \5^RG&´cK :w2%  )+0ÝX9 60Ým4 r $' 9E: p({.3)zh]$Ü(.+$)$)* p({$)38)3 )3 Å .+$&)3+: ;6=8= ¹3: 2)3!¨C% 6*z Ž}«0¤8.3 9' 7c!8)˜$&)+*¥p({3 )3š9' r $9' )3 "($96*•! )Èp(!8'…3.39E: q5) TR B JKKQL IzŒ6Y B5 iC k D l_of`H‘ ’&õ 43…$ š9P;6» ¯8¯ ¯Ÿm;6»8¯ ¯& +4  $&)X796: u c)?jÞ{3!8.½$)*#p({+$)? )3 )3 Å .+$&)3+:[;E= = :W}2) ¼ h ‡ Ô0 )7ÑzxX)7$ 070Ñ¿ô$& 8c)?H¿w!!8%ht!8«p(! …+!8$?:~q5) T(R B JKKQL×IŒ6Y B52iC kloM`j‘ ’ 4š…+$89w=& X=Ÿz= g8g342x8‡ !87'!:
Part-of-Speech Tagging Based on Hidden Markov Models Assuming Joint Independence

Sang-Zoo Lee and Jun-ichi Tsujii
Department of Information Science
University of Tokyo
Hongo 7-3-1, Bunkyo-ku, Tokyo 113-0033, Japan
{lee,tsujii}@is.s.u-tokyo.ac.jp

Hae-Chang Rim
Department of Computer Science
Korea University
1, 5-Ka, Anam-Dong, Seongbuk-Ku, Seoul 136-701, Korea
rim@nlp.korea.ac.kr

Abstract

In this paper we present part-of-speech taggers based on hidden Markov models, which adopt a less strict Markov assumption to consider rich contexts. In models whose parameters are very specific, like lexicalized ones, the sparse-data problem is very serious, and conditional probabilities also tend to be estimated unreliably. To overcome data sparseness, a simplified version of the well-known back-off smoothing method is used. To mitigate the unreliable estimation problem, our models assume joint independence instead of conditional independence, because joint probabilities have the same degree of estimation reliability. In experiments on the Brown corpus, models with rich contexts achieve relatively high accuracy, and some models assuming joint independence show better results than the corresponding HMMs.

1 Introduction

Part-of-speech (POS) tagging can be defined as a process in which a proper POS tag
¶¹¸Â¼Š¸¤¸¤¶Óȁ²h½ÁÊù´lÄæ½Á¼ŠÆ8µP¿{Ċ¾ÊP¶¹²Ø´l½WÖq´¤¸ì¼Š²·Êù¸lÄ ¶Ó´óƅ¼Š²ùÉ5½!ÍR¶¹½…¿1½ÁÊP¼Š¸ó¼æÆ…ÐÞ¼Š¸¤¸¤¶ÝÜ3ƅ¼^´¤¶¹Ä²Øº¾¤ÄŠÉhà ÐÓ½ÁÎòôË«¶Ó´¤Æ8µh½ÁйÐúÑ<ûÁüŠüýŠ÷8×þõ<͊½…¾¼yÊh½Áƅ¼ŠÊh½ŠÑΩ¼Š² Ú ¿{Ċ¾¤ÌR¸óÅpĊ¾þñ{õŸöä´¤¼^Ȋȁ¶¹²hÈäµ¼Á͊½ÿÒ¸l½ÁÊP¼>¿N¶¹Êh½ ¾¼Š²hȊ½þĊÅÎ¥¼ŠÆµ¶¹²½þй½Á¼^¾²¶¹²hÈL´l½ÁƵ²¶ Òh½Á¸ ¸¤Ò·Æµ ¼Š¸®¼þµ¶¹ÊÊh½Á²LËn¼^¾¤ÌŠÄGÍêΩÄRÊh½ÁЛòôí<ËnËy÷òIµ¼^¾là ²¶¹¼^ÌÛ½…´ù¼ŠÐú×ÓÑ'ûÁüŠü÷8ÑÔ¼àΩ¼`Öh¶¹ÎøÒÎ ½Á² ´l¾¤ÄŠº Ú Î(ÄqÊh½ÁÐ òN¼^´¤²·¼^º·¼^¾¤Ìqµ¶úѧûÁüŠü÷8Ñ]´l¾8¼Š²¸lœĊ¾Ω¼^´¤¶Óā² ¾Ò·ÐÓ½Á¸Àòôë{¾¶¹Ð¹ÐúÑ ûÁüŠü ÷8ћ¼!Êh½Áƅ¶Þ¸¤¶Óā²ÿ´l¾¤½…½ÿò ½…½ó½…´ ¼ŠÐú×ÓÑ{ûÁüŠüŠü÷8ÑJ¾¤½Áй¼`Öh¼^´¤¶Óā²sй¼^É5½Áй¶¹²È®òôñJ¼ŠÊh¾ ÄhÑIûÁüŠü÷8Ñ ëI¼Áڊ½Á¸¶¹¼Š²ç¶Þ²hÅp½…¾½Á²ÆW½Âòmöq¼ŠÎøÒh½Áй¸¤¸¤Ä²,Ñ(ûÁüŠü÷8Ñʶ¹¸là ÆW¾¶ÞΩ¶¹²¼^´¤¶Ó͊½{ÐÓ½Á¼^¾²¶Þ²hÈøò ã¶¹²,ÑhûÁüŠü÷8Ñ ¼è²h½ÁÒh¾8¼ŠÐ ²h½…´³Ã ¿{Ċ¾¤ÌsòmöqƵ·Î©¶¹Ê,Ñ,ûÁüŠü ÷8Ñ5¼Š²ʧ¸lĩā²,× ±³²þ´¤µ¶¹¸º·¼^º5½…¾§¿{½yº¾¤ÄŠº5ā¸l½®µ¶¹ÊʽÁ²ìË«¼^¾¤ÌŠÄGÍ Î(ÄqÊh½Áй¸ùÅpĊ¾ º·¼^¾¤´³ÃmĊÅpÃY¸lº5½…½ÁƵ ´¤¼^Ȋȁ¶¹²ÈhÑä¿ µ¶¹Æµ ¼ŠÊhĊº·´B¼<ÐÓ½Á¸¤¸B¸l´l¾¶¹ÆW´JË«¼^¾¤ÌŠÄšÍø¼Š¸¤¸¤ÒÎ(º·´¤¶Óā²òI¶¹²·Ð¹¼^¾ÁÑ ûÁüý÷)´lÄÆWⲏ¤¶¹Êh½…¾1¾¶¹Æ8µ¥ÆWā² ´l½WÖR´¤¸…×1ë1½Áƅ¼ŠÒ·¸l½;¸¤ÒƵ Î(ÄqÊh½Áй¸µ¼šÍн®¼ й¼^¾Ȋ½«²RÒÎɽ…¾Ċşº3¼^¾¼ŠÎ(½…´l½…¾¸…Ñ ´¤µh½…ÚÿÎøÒ¸l´y¸¤Òqå5½…¾yÅp¾āÎǸlº·¼^¾¸l½WÃYÊ·¼^´¤¼º¾¤ÄŠÉ·Ð¹½ÁÎ Ò²·ÐÓ½Á¸¤¸<´¤µh½…Ú®µ¼Á͊½¥¼Š²®½Á²āÒhȁµy͊āйÒΩ½øÄŠÅ{´l¾¼Š¶¹²hà ¶¹²ÈsÆWĊ¾¤º·Ò¸…×ÀËnĊ¾¤½…ÄG͊½…¾ÁÑ É5½Áƅ¼ŠÒ¸l½ ¸¤Ò·ÆµóΩÄRÊh½ÁÐÞ¸ ¼Š¸¤¸ÒÎ(½nÆWā²·Ê¶Ó´¤¶Óā²¼ŠÐ ¶¹²Êh½…º5½Á²Ê½Á²ÆW½ŠÑ1´¤µ½º¾¤ÄŠÉà ¼^É·¶Þй¶Ó´YÚP½Á¸l´¤¶¹Î©¼^´l½Á¸LĊʼn´¤µ½Á¶Ó¾!º·¼^¾¼ŠÎ©½…´l½…¾¸LΩ¼šÚ µ¼šÍнø¸¤´¤¼^´¤¶¹¸l´¤¶¹Æ…¼ŠÐ¹Ð¹Ú›Ê·¶Ýå½…¾½Á²ƒ´ ¾½Áй¶¹¼^É·¶¹ÐÞ¶Ó´YÚ]´¤µ¼^´ Êh½Wà º5½Á²Ê¸nā²ÿ´¤µh½ ² ҷΟÉ5½…¾®ÄŠÅ¥¸¤¼ŠÎ(º3ÐÓ½Á¸«ÄŠÅ]ÆWā²Ê¶Óà ´¤¶Óā²·¼ŠÐ´l½…¾Î¥¸…×;âěĚ͊½…¾8ÆWāÎ(½(´¤µh½Ü·¾¸l´<º¾¤ÄŠÉ·Ð¹½Á뤄 ¼<¸¤¶¹Î(º3й¶ÝÜ·½ÁÊè͊½…¾¸¶ÓⲟĊŷ´¤µh½1¿{½ÁйÐÝÃmÌq²hÄG¿N²ŸÉ3¼ŠÆ¤Ì ÃmÄ^å ¸¤Î©Ä Ċ´¤µ¶Þ²hÈÎ(½…´¤µÄRÊ]¶¹¸1Ò·¸l½ÁÊ,×?âÄøÎ©¶Ó´¤¶Óȁ¼^´l½èÒ²h¾½Wà й¶Þ¼^É·ÐÓ½{½Á¸l´¤¶¹Î©¼^´¤¶Óā²º·¾¤ÄŠÉ·ÐÓ½ÁΧъāÒ¾Î(ÄRʽÁй¸J¼Š¸¤¸¤ÒÎ(½ élā¶¹² ´)¶¹²Êh½…º5½Á²Ê½Á²ÆW½É5½…´Y¿{½…½Á²¥¾¼Š²·ÊhāÎáÍ^¼^¾¶¹¼^ɷй½Á¸ ¶¹²·¸l´l½Á¼ŠÊ ĊŠÆWā²Ê·¶Ó´¤¶Óā²¼ŠÐ{¶¹²Êh½…º5½Á²Ê½Á²ÆW½¥É5½Áƅ¼ŠÒ¸l½ élā¶¹² 
´º¾¤ÄŠÉ·¼^É3¶¹Ð¹¶Ó´¤¶Ó½Á¸Jµ¼šÍнN´¤µh½<¸¤¼ŠÎ(½Nʽ…ÈŠ¾¤½…½NĊÅ,½Á¸³Ã ´¤¶¹Î¥¼^´¤¶Óā²§¾¤½Áй¶Þ¼^É·¶¹Ð¹¶Ó´aÚŠ×  ˜ E ­1A5\`M·ð®@¯ŠADDVYC)D J¶ÓȁÒh¾½û]¸µhĚ¿N¸¼й¼^´l´¤¶ÞÆW½¸l´l¾8ÒÆW´¤Òh¾¤½©ÄŠÅ ¼Š²)²hà ȁй¶Þ¸¤µ¸l½Á² ´l½Á²ÆW½ŠÑJй¶Ó½Á¸§Ð¹¶Ó̊½s¼3Ě¿{½…¾Á×! qÑ/¿Nµh½…¾½ ½Á¼ŠÆ8µó²hÄqÊh½§µ¼Š¸©¼®¿{Ċ¾Ê»¼Š²Ê»¶Ó´¤¸(ñIõ/ö‰´¤¼^ÈϼвÊ ¿Nµ½…¾¤½§´¤µh½¸l½"RÒh½Á²ÆW½«ÆWā²²h½ÁÆW´l½ÁÊóÉ Ú É5āйʻй¶¹²½Á¸ ¶¹²·Ê¶¹Æ…¼^´l½Á¸I´¤µh½ŸÎ©Ä¸l´Nй¶Ó̊½ÁйÚ]¸l½"RÒh½Á²ÆW½Š× #%$& ')(+*-,/.0*21 .435%.0687 9y½ÀÉ3¼Š¸¤¶¹Æ…¼ŠÐ¹ÐÓڝœāйÐÓÄG¿ ´¤µh½»²hĊ´¤¼^´¤¶Óā²>ĊŮòIµ¼^¾là ²¶Þ¼^Ìn½…´Ÿ¼ŠÐm×ÓÑ{ûÁüŠü÷;´lÄnÊh½Á¸¤ÆW¾8¶ÓÉ5½ëI¼šÚнÁ¸¤¶¹¼Š²Î(ÄRÊhà ½Áй¸œĊ¾§ñ{õ/öÀ´¤¼^Ȋȁ¶¹²hÈh×>±³²ó´¤µ·¶¹¸¥º3¼^º½…¾šÑ ¿1½y¼Š¸³Ã ¸¤Ò·Î(½»´¤µ¼^´;:"<>=+?@<BA?DCECEC"?@<GFIH¶¹¸y¼¸¤½…´sĊŠJLKMJ NPO!Q!RTS KVUWUYX NPO!Q!RTS K[Z]\%^ O!Q!_`R KLa%X O!Q!_`R KVbcU O!Q!_`R KLdMd O!Q!_`R K[Z]\ e KVf g e K[bhU e KVUWU i j[k Rml KVUWU ijk Rml KVZ0\ n K n JLKMJ B¶¹ÈÒh¾¤½©ûoqpêй¼^´l´¤¶¹ÆW½/ĊÅrJй¶Ó½Á¸ÐÞ¶Ó̊½/¼s·ÄG¿1½…¾×! ¿{Ċ¾Ê¸…Ñt:"u = ?vu A ?wCECEC"?;umxyHL¶¹¸ ¼ÿ¸l½…´»ÄŠÅ«ñ{õ/ö ´¤¼^ȁ¸…Ñ ¼›¸l½"RÒh½Á²ÆW½Ċž¼Š²·ÊhāΠÍ`¼^¾¶Þ¼^É·ÐÓ½Á¸rz ={ |~} z = z A CECECz | ¶¹¸y¼¸l½Á² ´l½Á²ÆW½ÂĊŀÔ¿{Ċ¾Ê¸…Ñ ¼Š²Êì¼ ¸l½" Ò½Á²ÆW½nĊş¾¼Š²ÊhāÎÛÍ^¼^¾¶¹¼^É·ÐÓ½Á¸‚ ={ |ƒ}  =  A CECECY | ¶¹¸?¼;¸l½"RÒh½Á²ÆW½ĊÅ)§ñ{õ/öŸ´¤¼^ȁ¸…×ë{½Wà ƅ¼ŠÒ¸l½½Á¼ŠÆ8µ©ÄŠÅ¾¼Š²ÊhāÎÿÍ^¼^¾¶¹¼^É3ÐÓ½Á¸qz ƅ¼Š²©´¤¼^̊½ ¼Š¸ ¶Ó´¤¸<Í^¼ŠÐ¹Òh½¼Š² ÚĊÅ)´¤µh½¿{Ċ¾Ê¸è¶¹²n´¤µh½øÍŠÄRƅ¼^É3Òй¼^¾¤ÚŠÑ ¿{½]Êh½Á²hĊ´l½]´¤µh½¥Í^¼ŠÐ¹Òh½¥ÄŠÅ„zt…É Út<†… ¼Š²Ês¼º·¼^¾là ´¤¶¹Æ…ÒÐÞ¼^¾ ¸l½"RÒh½Á²ÆW½ĊÅÍ^¼ŠÐ¹Òh½Á¸ ÅpĊ¾>z‡… { ˆ òh‰„ŠŒ‹ ÷ ÉRÚ <†… { ˆ ×±³²§¼©¸¤¶¹Î¥¶¹Ð¹¼^¾{¿{¼šÚŠÑ·¿{½ŸÊh½Á²hĊ´l½/´¤µ½;Í`¼ŠÐ¹Ò½;ĊЁ…É Úrum…·¼Š²Êø¼ º·¼^¾¤´¤¶¹Æ…ÒÐÞ¼^¾ã¸l½"RÒh½Á²ÆW½{ĊÅhÍ`¼ŠÐÞÒh½Á¸œĊ¾  … { ˆ òh‰†ŠŒ‹R÷ ÉRÚu … { ˆ ×BhĊ¾;Ȋ½Á²h½…¾8¼ŠÐ¹¶Ó´YڊÑ´l½…¾Ω¸Ž< … { ˆ ¼Š²Êum… { ˆ òh‰q‹ ÷I¼^¾¤½Êh½WÜ3²h½Áʧ¼Š¸NɽÁ¶Þ²hȽÁÎ(º´YÚŠ× â µ½§º·Òh¾¤º5ā¸l½ĊÅ;ë{¼šÚнÁ¸¤¶¹¼Š²ÂÎ(ÄRʽÁй¸ÅpĊ¾]ñ{õ/ö ´¤¼^Ȋȁ¶¹²hț¶¹¸N´lÄ¥Ü3²·Ê ´¤µh½øÎ(ā¸l´èй¶Ó̊½ÁÐÓڛ¸l½"RÒh½Á²ÆW½øÄŠÅ ñ{õŸö´¤¼^ȁ¸©Å“ÄŠ¾]¼yȁ¶¹ÍнÁ²ó¸l½"RÒh½Á²ÆW½ ĊÅè¿{Ċ¾Ê¸…ÑI¼Š¸ œāйÐÓĚ¿ ¸Eo q‘“’]”–• —+˜ ™ e lšM› e[œ Ÿž–  ¡£¢ l[‘¤ ”–• — ™~¥ ”–• —§¦L¨‚”–• — ™ ’ ”–• — 
˜©‘ª˜ ™ e lšM› e[œ Ÿž–  ¡ ¢ l[‘ ¥ ”–• —§¦ ’ ”–• — ˜ ‘«L˜ ™ e lšM› e[œ Ÿž–  ¡ ¢ l[‘ ¥ ”–• —¬ ’ ”–• — ˜ ¢ lV‘“’ ”–• — ˜ ™ e lšM› e[œ Ÿž–  ¡£¢ l[‘ ¥ ”–• — ¬ ’]”–• —˜ ‘Ÿ­M˜ qR²,×{û©É5½ÁÆWāÎ(½Á¸sqR²,×WÉ5½Áƅ¼ŠÒ¸l½¥¾½…Åp½…¾¤½Á²·ÆW½¥´lÄ ´¤µh½¾¼Š²·ÊhāÎÿÍ`¼^¾8¶¹¼^É·ÐÓ½Á¸B´¤µh½ÁΩ¸¤½ÁÐÓ͊½Á¸ƅ¼Š²(É5½IāΩ¶Ó´³Ã ´l½ÁÊ,×~qR²,×W§¶¹¸/´¤µh½Á²y´l¾¼Š²¸¤ÅpĊ¾Ω½ÁÊs¶¹² ´lÄ®qR²,×W ¸¤¶¹²·ÆW½/ñ¾šòh< ={ | ÷¶¹¸ ÆWⲏl´¤¼Š² ´ œĊ¾N¼ŠÐ¹Ð%u ={ | × â µ½Á²,Ñ,´¤µh½(º¾¤ÄŠÉ3¼^É·¶¹Ð¹¶Ó´aÚ ñ¾šòhu ={ | ?< ={ | ÷<¶¹¸<ɾ¤Ä^à ̊½Á²ÊhĚ¿N²¶¹² ´lį° ²,×± ŸÉRÚ©Ò¸¤¶¹²È/´¤µh½ Æ8µ¼Š¶¹²]¾ÒÐÓ½Š× ¢ l[‘ ¥ ”–• — ¬ ’]”–• —+˜ ™ — ² ³ ´ ” µ ¢ l[‘ ¥ ³ ¦ ¥ ”–• ³“¶ ”¬ ’ ”–• ³Ÿ¶ ” ˜ · ¢ l[‘“’ ³ ¦ ¥ ”–• ³ ¬ ’]”–• ³“¶ ”¸˜§¹ ‘“º`˜ í Ě¿{½…ÍŠ½…¾ÁÑ ¶¹´ã¶¹¸½Á¶Ó´¤µh½…¾J¶¹Î(º·Ð¹¼ŠÒ·¸¤¶ÓÉ·ÐÓ½JĊ¾?¶¹Î(º5⏤¸¶ÓÉ·ÐÓ½ ´lÄ ÆWāÎ(º3Òh´l½©ñ¾Gòhum…B»2u ={ …h¼ = ?< ={ …h¼ = ÷<¼Š²Êyñ¾šòh<†…Ž» u ={ …¸?< ={ …h¼ = ÷{¶¹²° ² ×½ h× âNµh½>¸l´¤¼Š²Ê¼^¾Ê í<Ë«Ë ¸¤¶ÞÎ(º·Ð¹¶ÝÜ3½Á¸À´¤µh½ÁÎ ÉRÚ Î©¼^Ìq¶¹²hÈ´¤µh½«ÅpāÐÞÐÓĚ¿N¶Þ²hÈÏ´a¿1ÄÀ¸l´l¾8¶¹ÆW´Ë«¼^¾¤ÌŠÄGÍ켊¸³Ã ¸¤Ò·Î(º´¤¶Óā²sòôÆWā²Ê·¶Ó´¤¶Óā²¼ŠÐ,¶¹²·Êh½…º5½Á²Êh½Á²ÆW½G÷8Ñ2° ²,×- ¼Š²ʾqR²,×BRÑ´lÄÂȊ½…´®¼óΩĊ¾¤½‰´l¾8¼ŠÆW´¤¼^É·ÐӽϜĊ¾뤄 ° ²,×3ý × ¢ l[‘ ¥ ³ ¦ ¥ ”–• ³Ÿ¶ ”¬ ’ ”–• ³“¶ ” ˜2¿ ¢ l[‘ ¥ ³ ¦ ¥ ³“¶ÁÀ • ³“¶ ” ˜ ‘ÂL˜ ¢ l[‘“’ ³ ¦ ¥ ”–• ³ ¬ ’]”–• ³Ÿ¶ ”¸˜2¿ ¢ lV‘“’ ³ ¦ ¥ ³ ˜ ‘ŸÃM˜ ¢ l[‘ ¥ ”–• — ¬ ’]”–• —+˜-¿ — ² ³ ´ ” µ ¢ l[‘ ¥ ³ ¦ ¥ ³Ÿ¶À • ³Ÿ¶ ”¸˜ · ¢ l[‘“’ ³ ¦ ¥ ³ ˜ ¹ ‘ÄL˜ â µ½]¸l´¤¼Š²Ê·¼^¾ʉí<Ë«Ë¼Š¸¤¸¤Ò·Î(½Á¸Ÿ´¤µ·¼^´´¤µh½º¾¤ÄŠÉà ¼^É·¶Þй¶Ó´YÚóĊŸ¼»Æ…Òh¾¾¤½Á²ƒ´§´¤¼^Ȍu¸…ÆWā²Ê¶Ó´¤¶¹Ä²¼ŠÐ¹ÐÓÚÀÊh½Wà º5½Á²Ê¸ā²Àā²ÐÓÚÏ´¤µ½ º¾¤½…Íq¶ÓāÒ¸ÆÅ ´¤¼^ȁ¸‚um…h¼ Ç { …h¼ = ¼Š²Ê!´¤µ¼^´ ´¤µh½sº·¾¤ÄŠÉ·¼^É·¶¹ÐÞ¶Ó´YÚÀĊżÀƅÒh¾¾¤½Á²ƒ´¿1Ċ¾Ê <†…ÆWā²Ê·¶Ó´¤¶Óā²¼ŠÐ¹Ð¹Ú¥Êh½…º5½Á²Ê¸ā²ā²ÐÓÚ´¤µh½ŸÆ…Òh¾¾¤½Á²ƒ´ ´¤¼^ÈP=š×›±³²y´¤µh½¸l´¤¼Š²Ê¼^¾ÊÏÎ(ÄRʽÁÐ òÅ } ûš÷8ÑBÅpĊ¾ø½WÖ Ã ¼ŠÎ(º3ÐÓ½ŠÑ´¤µh½§º¾Ċɷ¼^É·¶¹Ð¹¶¹´YÚyĊÅè¼y²hÄRʽ;l¼yÈ p?âG yĊŠ´¤µh½Î(ā¸l´Nй¶¹ÌŠ½ÁÐÓÚ]¸l½"RÒh½Á²ÆW½¶¹²J¶ÓȁÒh¾¤½©ûŸ¶¹¸ ƅ¼ŠÐ¹Æ…Òqà й¼^´l½ÁÊ¼Š¸ œāйÐÓĚ¿ ¸Eo ñ¾`òɄÊ»ËÌË@Í0?LÎÐÏ(÷ Ñ ñ)¾GòÒÓ»Ʉè÷ Ô ½Á²h½…¾¼ŠÐ¹ÐÓڊу´¤µh½ ¸l´¤¼Š²Ê¼^¾8Ê(í Ë«ËÙµ¼Š¸?¼/й¶¹Î¥¶Ó´¤¼`à 
´¤¶Óā²s´¤µ¼^´Ÿ¶Ó´/ƅ¼Š²‰²hĊ´Ÿ¸lāÐÓ͊½©ÆWāΩº·Ð¹¶¹Æ…¼^´l½Áʉ¼ŠÎŸÉ·¶Óà ȁÒ¶¹´¤¶Ó½Á¸ É5½Áƅ¼ŠÒ¸l½¶Ó´NÊhÄR½Á¸ ²Ċ´NÆWⲏ¤¶¹Ê½…¾N¾¶¹Æ8µÆWā²qà ´l½WÖq´¤¸…ןâãÄ]ÄG͊½…¾ÆWāÎ(½©´¤µ·¶¹¸ ÐÞ¶¹Î©¶Ó´¤¼^´¤¶Óā² Ñ·´¤µh½(¸l´¤¼Š²qà ʼ^¾8Ê]í<ËnË7¸¤µhāÒйʥÉ5½N½WÖq´l½Á²Êh½Áʛ¸lÄø´¤µ¼^´¶Ó´1ƅ¼Š² ÆWⲏÒÐÓ´¾¶¹Æ8µ§¶¹²hœĊ¾Ω¼^´¤¶Óā²§¶Þ²§ÆWⲃ´l½WÖq´¤¸…× #%$Ÿ# ՎÖ%(×68,W.06±.Ø3Ù5%.06P7cÚ p<² ½WÖR´l½Á²Ê½ÁÊ í Ë«ËÏÑÜÛèòc0Ý Ç { Þß ?LzÝáà { âVß ÷8ÑÂÅpĊ¾ ñIõ/ö(´¤¼^Ȋȁ¶Þ²hÈ(¶¹¸1ʽWÜ3²h½ÁÊ]É Ú¥Î¥¼^ÌR¶¹²ȟ´¤µh½èœāйÐÓĚ¿ à ¶¹²È<´a¿1ÄøÐÓ½Á¸¤¸?¸l´l¾8¶¹ÆW´Ën¼^¾̊ĚÍ(¼Š¸¤¸¤ÒΩº´¤¶Óā²,ÑÁ° ²,×yã ¼Š²Ê®qR²,×·üRÑ·¼Š¸œāйÐÓÄG¿N¸Eo ¢ l[‘ ¥ ³ ¦ ¥ ”–• ³“¶ ” ¬ ’]”–• ³Ÿ¶ ”¸˜ ¿ ¢ l[‘ ¥ ³ ¦ ¥ ³Ÿ¶À • ³Ÿ¶ ” ¬ ’ ³Ÿ¶ä • ³Ÿ¶ ”¸˜‘Ÿå`˜ ¢ l[‘“’ ³ ¦ ¥ ”–• ³ ¬ ’]”–• ³“¶ ”¸˜2¿ ¢ l[‘“’ ³ ¦ ¥ ³Ÿ¶æ • ³ ¬ ’ ³Ÿ¶ç • ³Ÿ¶ ”¸˜‘Ÿè`˜ é ‘¤½ê À • äMë ¬ì¨ ê æ • ç¸ë ˜ ¦ ™ ¢ l[‘ ¥ ”–• —y¬ ’ ”–• — ˜ ¿ — ² ³ ´ ” µ ¢ l[‘ ¥ ³ ¦ ¥ ³“¶ÁÀ • ³“¶ ”¬ ’ ³Ÿ¶ä • ³“¶ ” ˜ · ¢ lV‘“’ ³ ¦ ¥ ³“¶Áæ • ³ ¬ ’ ³“¶ç • ³“¶ ”¸˜r¹ ‘ªíM˜ ±³²s¼§Î(ÄqÊh½ÁÐYÛèòc Ý Ç { Þß ?Lz ݤà { âLß ÷8Ñ´¤µh½¥º¾¤ÄŠÉ·¼^É·¶Þй¶Ó´YÚ ÄŠÅ,´¤µh½èƅÒh¾¤¾½Á²ƒ´´¤¼^Èsu … ÆWā²Ê¶Ó´¤¶¹Ä²¼ŠÐ¹ÐÓÚøÊh½…º5½Á²Ê¸?ā² ”–î S–ï e O!OñðEòLóQ!SPô×RmõRml›†Q!ö RTô e S ªW‘“÷ Q!šVl e › e S8Q!ö„‘ a)ø e l–ù ö Q e _@R¸õ e O n ò>ªèMèL­`˜–˜ j lú«Ü‘¤õ–lQ!šVl e › e SÆQûöü‘ýŽRmlQ e Oþô j ò ªèLèת[˜–˜ n É5Ċ´¤µ;´¤µh½º·¾¤½…ÍR¶¹ÄÒ¸%Åä´¤¼^ȁ¸Iu …c¼ Ç { …c¼ = ¼Š²Ê/´¤µh½?º¾¤½Wà Íq¶ÓāÒ¸Gÿn¿{Ċ¾Ê¸ <†…h¼ ÞM{ …h¼ = ¼Š²Ê´¤µh½<º¾¤ÄŠÉ3¼^É·¶¹Ð¹¶Ó´aÚĊŠ´¤µh½<ƅÒh¾¤¾¤½Á² ´)¿{Ċ¾ʂ<†…ãÆWā²Ê¶Ó´¤¶Óā²·¼ŠÐ¹ÐÓÚʽ…º½Á²·Ê¸?ā² ´¤µh½<ƅÒh¾¤¾¤½Á² ´)´¤¼^È(¼Š²Ê¥´¤µh½ º·¾¤½…ÍR¶¹ÄÒ¸´¤¼^ȁ¸ u¸…h¼ à { … ¼Š²Ê´¤µh½Ÿº¾½…ÍR¶Óāҷ¸]¿1Ċ¾Ê·¸G<†…h¼ â`{ …h¼ = ×±–²½WÖRº5½…¾là ¶¹Î(½Á² ´¤¸…Ñq¿{½/¸l½…´„Å ¼Š¸;ûèĊ¾GRэÿy¼Š¸øÄо„ÅÑÀ¼Š¸ ûøÄо>RÑ ¼Š²Ê§¼Š¸ ]Ċ¾ N×;±aņÿ ¼Š²Ê§¼^¾¤½߅½…¾¤ÄhÑ ´¤µh½Ÿ¼^É5ÄG͊½øÎ(ÄqÊh½Áй¸N¼^¾¤½²hā²hÃYÐÓ½WÖq¶Þƅ¼ŠÐ¹¶Ó߅½ÁʧÎ(ÄqÊh½Áй¸…× õ<´¤µh½…¾¿N¶¹¸l½ŠÑq´¤µh½…Ú§¼^¾¤½ŸÐ¹½WÖq¶¹Æ…¼ŠÐÞ¶Ó߅½ÁʛÎ(ÄRʽÁй¸…× ±³²s¼Š²y½WÖq´l½Á²Êh½ÁʉÎ(ÄRʽÁÐYÛ<òc Ý AM{ A[ß ?Lz Ý AM{ A[ß ÷8ÑœĊ¾ ½WÖh¼ŠÎ(º·ÐÓ½ŠÑR´¤µh½Nº·¾¤ÄŠÉ·¼^É·¶¹ÐÞ¶Ó´YÚøÄŠÅ ¼Ÿ²hÄqÊh½~l¼yÈ p?℠øÄŠÅ ´¤µh½ŸÎ©Ä¸l´Nй¶Ó̊½ÁйÚ]¸l½"RÒh½Á²ÆW½Ÿ¶Þ²J¶ÓȁÒh¾¤½©û/¶Þ¸ ƅ¼ŠÐ¹Æ…Òqà 
й¼^´l½ÁÊ ¼Š¸NÅpāÐÞÐÓĚ¿N¸"o ñ¾šòÉG » ˇË@Í0?Lίπ? c‰y? c‰G÷ Ñ ñ)¾šòÒ» ÉGG?[ˇËtÍ]?LÎÐÏ~?  h‰Á? h‰`÷  øA5°A,žìMh¯ŠM°«M·\G¯VYžìA¯ŠVYHC ë{½Áƅ¼ŠÒ¸l½À´¤µ½»½WÖR´l½Á²Ê½ÁÊáÎ(ÄqÊh½Áй¸yµ¼šÍнó¼Ð¹¼^¾¤Èн ²RÒΟÉ5½…¾ĊÅJº·¼^¾¼ŠÎ©½…´l½…¾¸…Ñ´¤µh½…Ú›ÎøÒ¸l´N¸¤Òhå½…¾ œ¾¤ÄÎ É5Ċ´¤µ‰¸¤º·¼^¾¸l½WÃYʼ^´¤¼«º¾ĊɷÐÓ½ÁÎ ¼Š²ʉҷ²h¾¤½Áй¶¹¼^É3ÐÓ½©½Á¸³Ã ´¤¶¹Î©¼^´¤¶¹Ä²yº¾¤ÄŠÉ·Ð¹½ÁΧ×(â µh½©Î(ÄqÊh½Áй¸;¼ŠÊhĊº·´/¼ ¸¤¶¹Îà º·Ð¹¶ÓÜ·½ÁÊÉ3¼ŠÆ¤Ì ÃmÄ^åó¸¤Î(ÄRĊ´¤µ¶¹²hț´l½ÁƵ·²¶ŸRÒh½(¼Š¸/¼ ¸lÄ^à йÒh´¤¶¹Ä²´l𴤵h½Ü·¾¸¤´Bº·¾¤ÄŠÉ·ÐÓ½ÁΧс¼Š²·Ê é³Ä¶¹² ´¶¹²Ê½…º½Á²hà Êh½Á²ÆW½ ¼Š¸¤¸¤ÒÎ(º·´¤¶Óⲩ¼Š¸)¼¸lāйÒh´¤¶¹Ä²(´lÄ/´¤µh½<¸l½ÁÆWā²Ê,× %$–& '307068.! 0*"$#&%V5'¾Ú×3Ù5%5-(()h,+* ±³².¸¤Òº½…¾ÍR¶¹¸¤½ÁÊàÐÓ½Á¼^¾²¶Þ²hÈhѝ´¤µh½Ù¸¤¶¹Î(º·ÐÞ¶Ó½Á¸l´>º·¼`à ¾¼ŠÎ(½…´l½…¾æ½Á¸l´¤¶¹Î©¼^´¤¶Óā² ¶¹¸ê´¤µh½'Î¥¼`Öq¶¹ÎøÒΠй¶Ó̊½Wà й¶¹µÄ ÄqÊòôˇ B÷½Á¸l´¤¶¹Î¥¼^´¤¶Óā²ò,èÒʼؽ…´ê¼ŠÐú×ÓѝûÁüý÷ ¿Nµ¶ÞƵÀΩ¼`Öh¶¹Î©¶¹ß…½Á¸(´¤µh½º·¾¤ÄŠÉ·¼^É·¶¹ÐÞ¶Ó´YډĊÅè¼Ï´l¾¼Š¶¹²qà ¶¹²hȸl½…´…×â µh½Nˇ «½Á¸l´¤¶¹Î©¼^´l½<ĊÅ,´¤¼^ȧòÅ.-(ûš÷YÃmȊ¾¼ŠÎ º¾¤ÄŠÉ3¼^É·¶¹Ð¹¶Ó´aڊÑ3ñ)¾0/ à òhu … »Pu …h¼ Ç { …h¼ = ÷8Ñ ¶¹¸ ƅ¼ŠÐÞÆ…Òй¼^´l½ÁÊ ¼Š¸ œāйÐÓÄG¿N¸Eo ñ¾ / à òhum… »+um…h¼ Ç { …h¼ = ÷ } ]òhum…h¼ Ç { …ô÷ ]òhum…h¼ Ç { …h¼ = ÷ ò³ûŠûš÷ ¿Nµh½…¾½´¤µh½!ÅôÒ²ÆW´¤¶Óā² ]5ò215÷¾¤½…´¤Ò¾²¸ ´¤µh½!œ¾¤½Wà RÒh½Á²ÆWÚĊÅ31϶¹²«´¤µh½(´l¾8¼Š¶¹²¶¹²hț¸l½…´…× 9ìµ½Á²sÒ¸¤¶¹²È ´¤µh½yˇ ÿ½Á¸l´¤¶¹Î©¼^´¤¶Óā² Ñ Ê¼^´¤¼ ¸lº·¼^¾8¸l½Á²h½Á¸¤¸›¶¹¸]½…ÍŠ½Á² Î(Ċ¾¤½(¸l½…¾8¶ÓāÒ¸<¶¹²´¤µ½ø½WÖq´l½Á²Êh½ÁÊ®Î(ÄqÊh½Áй¸ ´¤µ¼Š²®¶¹² ´¤µh½¥¸¤´¤¼Š²Ê¼^¾ÊsΩÄRÊh½ÁÐÞ¸;É5½Áƅ¼ŠÒ¸l½©´¤µh½¥ÅpĊ¾Ω½…¾Ÿµ¼Š¸ ½…ÍŠ½Á²ΩĊ¾¤½/º·¼^¾¼ŠÎ©½…´l½…¾¸´¤µ¼Š²§´¤µ½ŸÐ¹¼^´l´l½…¾Á× òIµ½Á²,Ñ!ûÁüŠü÷8Ñó¿Nµ½…¾¤½æÍ^¼^¾¶ÓāҸ縤Î(ÄRĊ´¤µ¶¹²hÈ ´l½ÁƵ·²¶ŸRÒh½Á¸¿I¼Š¸B´l½Á¸l´l½ÁÊøÅpĊ¾J¼NÐÞ¼Š²hȁÒ¼^Ȋ½IÎ(ÄRʽÁЁÉRÚ Ò¸¤¶Þ²hÈ;´¤µh½Nº5½…¾¤º3ÐÓ½WÖq¶¹´YÚÎ(½Á¼Š¸¤Òh¾¤½ŠÑ ¾¤½…º5Ċ¾¤´l½ÁÊ©´¤µ·¼^´{¼ É·¼ŠÆ̃ÃmÄ^囸¤Î(ÄRĊ´¤µ¶¹²hÈ·ò4/¼^´lߊÑhûÁüãýŠ÷º5½…¾¤Å“ÄŠ¾Ω¸ É5½…´³Ã ´l½…¾(ā²»¼®¸¤Î©¼ŠÐ¹Ð1´l¾¼Š²¶Þ²hÈ«¸¤½…´ø´¤µ·¼Š² Ċ´¤µh½…¾©Î(½…´¤µqà Äqʸ…×B±–²(´¤µ½É·¼ŠÆ¤Ì ÃmÄ^帤Î(ÄRĊ´¤µ¶¹²hÈhу´¤µh½ ¸¤Î(ÄRĊ´¤µh½ÁÊ º¾Ċɷ¼^É·¶¹Ð¹¶¹´YÚêĊŠ´¤¼^È'òÅ.-(ûš÷YÃmȊ¾¼ŠÎ ñ¾057698Iòhu … » um…h¼ Ç { …h¼ = ÷I¶Þ¸ƅ¼ŠÐ¹Æ…Òй¼^´l½ÁÊ ¼Š¸NÅpāÐÞÐÓĚ¿N¸"o ñ)¾ 57698 òhu … » 
u …h¼ Ç { …h¼ = ÷ } :<;>= ñ¾ / à òhu¸…q»+um…h¼ Ç { …h¼ = ÷ ¶ÓÅ@?s! A òhum…h¼ Ç { …h¼ = ÷Rñ¾ 57698 òhu¸…q»+um…h¼ ÇCB ={ …c¼ = ÷l¶¹Å+? }  ò³û"÷ ¿Nµh½…¾¤½ ? } ],òhu¸…h¼ Ç { …ú÷L??>D } ò2?-ûš÷  = B =  = ;E= } =GF =H Ý = B =¸ßI±|$JLK ” | ” û H Ý = B =¸ßI±|MJLK ” | ” ±³²Ï´¤µ½]½"RÒ¼^´¤¶Óā² ¼^É5Ě͊½ŠÑ  = ʽÁ²hĊ´l½Á¸´¤µh½›² ÒÎ(à É5½…¾ÀĊÅóòÅN-ûš÷YÃmȊ¾¼ŠÎ ¿Nµh⏤½!Åp¾½" Òh½Á²·ÆWÚä¶¹¸O?RÑ ¼Š²Ê󴤵h½®ÆWÄ ½QP]ƅ¶¹½Á²ƒ´ ;>= ¶¹¸]ƅ¼ŠÐ¹ÐÓ½ÁÊ󴤵h½nʶ޸¤ÆWāÒ² ´ ¾¼^´¤¶¹ÄhÑó¿Nµ¶ÞƵ ¾¤½`·½ÁÆW´¤¸ÿ´¤µh½ Ô ÄRÄRÊqÃYâBÒh¾¶Þ²hÈù½Á¸³Ã ´¤¶¹Î¥¼^´l½ƒò Ô Ä ÄqÊ,ÑìûÁü÷ A × qR²,×yû"两¼ÁÚq¸´¤µ¼^´ ñ)¾ 5E698 òhum…4»‚u¸…h¼ Ç { …h¼ = ÷®¶¹¸®Ò²Êh½…¾¤Ãm½Á¸l´¤¶¹Î©¼^´l½ÁÊÿÉRÚ ;E= ´¤µ¼Š²»¶¹´¤¸øÎ¥¼`Öq¶¹ÎøÒΠй¶¹ÌŠ½Áй¶¹µhÄRÄRÊϽÁ¸l´¤¶¹Î©¼^´l½ŠÑ{¶ÓÅ ?ƏRqѷĊ¾<¶¹¸ É·¼ŠÆ̊½ÁÊÄ^åsÉRÚ§¶Ó´¤¸N¸Î(Ä ÄŠ´¤µ·¶¹²hÈ(´l½…¾Î ñ)¾ 5E698 òhum…Ì» um…h¼ ÇCB ={ …c¼ = ÷¥¶¹²óº·¾¤ÄŠº5Ċ¾¤´¤¶Óā²À´lÄ ´¤µh½ Í^¼ŠÐ¹Òh½;ĊÅB´¤µh½;ÅôÒ²ÆW´¤¶Óā² A òhu¸…h¼ Ç { …h¼ = ÷1ĊÅJ¶Ó´¤¸ ÆWā²Ê¶Óà ´¤¶Óā²·¼ŠÐ,´l½…¾Î u …h¼ Ç { …h¼ = Ñ·¶¹Å@? } q× í Ě¿{½…ÍŠ½…¾ÁÑÉ5½Áƅ¼ŠÒ¸l½>qR²,× û"¾½" Ò¶¹¾¤½Á¸ÆWāÎ(º·Ð¹¶Óà ƅ¼^´l½ÁÊ©ÆWāΩº·Òh´¤¼^´¤¶Óā²(¶¹² A òhu¸…h¼ Ç { …h¼ = ÷8с¿{½ ¸¤¶¹Î(º3й¶ÓÅ“Ú ¶Ó´I´lÄ(Ȋ½…´ ¼ÅôÒ²ÆW´¤¶Óā²]ĊÅB´¤µh½èÅp¾¤½"RÒh½Á²ÆWÚ]ĊÅã¼(ÆWā²qà ʶ¹´¤¶Óā²¼ŠÐ´l½…¾Χѷ¼Š¸NÅpāÐÞÐÓĚ¿N¸"o A ò]5òhum…h¼ Ç { …h¼ = ÷ }TS ÷ } U Ñ WV ],òhu …h¼ Ç { …h¼ = ÷ }TSYX Z[\^]`_  V ],òhu¸…h¼ Ç { …h¼ = ÷ }TSYX ò³û"÷ ¿Nµ½…¾¤½ U } û H ZNa ³“¶ÁÀ • ³ { =cb _ ñ¾ 57698 òhu¸…V» um…h¼ Ç { …h¼ = ÷ Z.a ³Ÿ¶À • ³ { =cb _ ñ)¾ / à òhum…V» u¸…h¼ Ç { …h¼ = ÷ ?  V ]5òhu¸…h¼ Ç { …h¼ = ÷ }TSYX)} d a ³Ÿ¶À K ”–• ³ { = ]`_ { ef Ý a ³“¶ÁÀ • ³“¶ ” ß ]\ ñ¾ 57698 òhum…» um…h¼ ÇCB ={ …h¼ = ÷ ±³²Ì° ² ×ãû"RÑ5´¤µh½Ÿ¾8¼Š²hȊ½ĊŠS ¶¹¸ É3ÒƤ̊½…´l½ÁÊy¶¹² ´lěý ¾¤½…ȁ¶¹Ä²¸¸ÒƵ ¼Š¸ S } P?Áû?V8?V8? 
½?Vs¼Š²·Ê Shg  ¸¤¶Þ²ÆW½/¶Ó´¶¹¸ ¼ŠÐ¹¸¤Ä©Ê¶iP]ƅÒй´1´lÄ¥ÆWāΩº·Òh´l½/´¤µ¶Þ¸I½" Ò¼`à ´¤¶Óā² ÅpĊ¾N¼ŠÐÞÐ5ºā¸¸¤¶ÓÉ·ÐÓ½<Í^¼ŠÐ¹Òh½Á¸ĊŠS × j<¸¤¶¹²È/´¤µh½NœĊ¾Ω¼ŠÐÞ¶¹¸¤Î>ĊÅ,āÒh¾{¸¤¶¹Î(º3й¶ÝÜ·½ÁÊøÉ·¼ŠÆ̃à Ä^å ¸¤Î(ÄRĊ´¤µ¶¹²hÈhѧ½Á¼ŠÆ8µ'Ċūº¾¤ÄŠÉ·¼^É3¶¹Ð¹¶Ó´¤¶Ó½Á¸Ï¿Nµhā¸l½ ˇ ½Á¸l´¤¶¹Î¥¼^´l½á¶¹¸ì߅½…¾¤Ä'¶¹¸ìÉ·¼ŠÆ¤ÌнÁÊ:Ä^å ÉRÚP¶Ó´¤¸ ÆWĊ¾¤¾½Á¸lº5ā²Ê¶¹²hȝ¸¤Î©Ä Ċ´¤µ¶Þ²hÈ!´l½…¾Î§× ±³²>½WÖRº5½…¾¶Óà Î(½Á² ´¤¸…Ñ©´¤µ½À¸¤Î(ÄRĊ´¤µ¶¹²hȝ´l½…¾Î¥¸yĊÅñ)¾ 5E6@8 òhum…Ø» k b ö ‘ml e õonLò±ªTèMå`ÄL˜9p J ™ ªYQrqsut~ n u …c¼ Ç { …c¼ = ?< …h¼ ÞM{ …h¼ = ÷I¼^¾¤½ŸÊ½…´l½…¾Ω¶¹²h½Áʧ¼Š¸ ÅpāййĚ¿N¸Eo ñ)¾ 5E698 òhum… » um…c¼ ÇCB ={ …h¼ = ? <†…h¼ Þ B ={ …h¼ = ÷.¶ÓÅ0Å g û?`ÿ܏Lû ñ)¾ 5E698 òhum… » u¸…h¼ Ç { …h¼ = ÷ ¶ÓÅ0Å g û?`ÿ } û ñ)¾G5E698òhu … » u …h¼ ÇCB ={ …h¼ = ÷ ¶ÓÅ0Å çû?`ÿ }  ñ)¾ v&wèòhum…ú÷ ¶ÓÅ0Å } û?`ÿ }  p<й¸lÄhѧ´¤µh½!¸¤Î©Ä Ċ´¤µ¶Þ²hÈç´l½…¾8Ω¸ Ċūñ)¾ 5E698 òh<†…» um…c¼ à { …m?<†…c¼ âE{ …c¼ = ÷{¼^¾¤½Êh½…´l½…¾Ω¶¹²½Áʧ¼Š¸ œāйÐÓĚ¿ ¸Eo ñ)¾ 5E698 òh<†…q» u …h¼ à B ={ … ? <†…c¼ â B ={ …c¼ = ÷ ¶ÓÅx g û? €Lû ñ)¾ 5E698 òh<†…q»+um…h¼ à { …m÷ ¶ÓÅx g û?  } û ñ)¾ 5E698 òh<†…q»+um…h¼ à B ={ …m÷ ¶ÓÅx g û?  }  ñ)¾ v&wèòh<†…m÷ ¶ÓÅx } P?  
}  ±³²!´¤µh½‰½" Ò¼^´¤¶¹Ä²¸n¼^É5ÄG͊½ŠÑŸ´¤µh½Ò²¶ÓȊ¾¼ŠÎº¾¤ÄŠÉhà ¼^É·¶¹ÐÞ¶Ó´¤¶Ó½Á¸¼^¾¤½nƅ¼ŠÐÞÆ…Òй¼^´l½ÁÊ»ÉRÚ»Ò¸¤¶¹²hÈy¼Š²Â¼ŠÊʶӴ¤¶Ó͊½ ¸¤Î(ÄRĊ´¤µ¶¹²Èó¿N¶Ó´¤µ<y } ûz ¼ A‰¿Nµ¶¹Æ8µç¶¹¸®Æ8µhā¸l½Á² ´¤µh¾¤ÄÒȁµÀ½WÖqº½…¾8¶¹Î(½Á² ´¤¸…ם⠵h½½"RÒ¼^´¤¶Óā²óÅpĊ¾´¤µh½ ¼ŠÊʶ¹´¤¶Ó͊½1¸¤Î©Ä Ċ´¤µ¶Þ²hÈòµh½Á²,ÑhûÁüŠü÷B¶¹¸B¼Š¸JÅpāййĚ¿N¸Eo ñ¾ v`w òhu¸…q» um…h¼ Ç { …h¼ = ÷ } ]5òhu¸…h¼ Ç { …ú÷9-{y ZNa ³ ò],òhu¸…h¼ Ç { …ú÷-{y^÷ %$# |5&,)(},/.067q68,W.W6P,+"Á6 â µh½èº·¼^¾8¼ŠÎ(½…´l½…¾¸ ĊÅJ¼Š² í<ËnË Î©¼šÚ›µ·¼Á͊½ŸÊ¶Óå½…¾là ½Á² ´Êh½…ÈŠ¾¤½…½ĊÅ3¸¤´¤¼^´¤¶¹¸l´¤¶¹Æ…¼ŠÐq¾¤½ÁÐÞ¶¹¼^É·¶¹Ð¹¶¹´YÚ;É5½Áƅ¼ŠÒ¸l½Iº·¼`à ¾¼ŠÎ(½…´l½…¾¾¤½Áй¶¹¼^É3¶¹Ð¹¶Ó´aÚÊh½…º5½Á²Ê¸èā²Ï´¤µh½©Å“¾¤½" Ò½Á²ÆWÚ ÄŠÅ·ÆWā²Ê·¶Ó´¤¶Óā²¼ŠÐ ´l½…¾Χ×]hĊ¾?½WÖh¼ŠÎ(º·ÐÓ½ŠÑŠÐÓ½…´J¼èÆWĊ¾¤º·Ò¸ ÆWⲏ¤¶Þ¸l´ ĊŠûΩ¶Þйй¶Óā²§¿{Ċ¾Ê¸è¼Š²Ê«ÐÓ½…´<´¤µh½ÅpāÐÞÐÓĚ¿à ¶¹²hÈ]º·¼^¾¼ŠÎ(½…´l½…¾8¸ É5½½WÖR´l¾¼ŠÆW´l½Áʮœ¾¤ÄÎP´¤µh½ÆWĊ¾¤º·Ò¸ ÉRÚ]Ò¸¤¶Þ²hȟ´¤µh½;Ω¼`Öh¶¹ÎøÒÎÔй¶Ó̊½Áй¶ÞµhÄ ÄqÊ(½Á¸l´¤¶ÞΩ¼^´¤¶Óā²,× ñ)¾GòÒh÷ } PC~hû ñ)¾Gò ; » Òh÷ } PC¹û ñ)¾Gò€…÷ } PC~hû ñ)¾Gò ; »M…÷ } PC¹û ñ)¾Gò‚Á÷ } PC~hûÇñ)¾Gò ; »$‚Á÷ } PC¹û ±³²ó´¤µ¶Þ¸¥Æ…¼Š¸l½ŠÑ ´¤µh¾¤½…½®ÆWā²Ê¶Ó´¤¶Óā²·¼ŠÐ º¾¤ÄŠÉ3¼^É·¶¹Ð¹¶Ó´¤¶¹½Á¸…Ñ ñ¾šò ; »BÒq÷8Ñ;ñ)¾Gò ; »ƒ…÷8Ñ;¼Š²ʝñ¾`ò ; »W‚Á÷›¼^¾¤½s¼ŠÐ¹Ð q×¹ûÉ·Òh´Nñ)¾šò ; »Òh÷ ¶Þ¸ ¸l´¤¼^´¤¶¹¸l´¤¶Þƅ¼ŠÐ¹ÐÓÚ§Î(Ċ¾¤½¾¤½Áй¶¹¼^ɷй½ ´¤µ¼Š²ìĊ´¤µh½…¾¸›É5½Áƅ¼ŠÒ¸¤½«¶Ó´¤¸›¸¤¼ŠÎ©º·ÐÓ½®¸¤¶Ó߅½»ò³ûzqÑ~ ¿{Ċ¾Ê¸ } ûNÎ¥¶¹Ð¹Ð¹¶Óā² Ñ ñ¾WòÒq÷l÷)¶¹¸É3¶ÓȊȊ½…¾)´¤µ¼Š²]Ċ´¤µqà ½…¾¸…×]p<ÆW´¤Ò¼ŠÐ¹ÐÓڊÑ^´¤µ¶¹¸º·µh½Á²hāΩ½Á²hā²¶Þ¸ ͊½…¾¤ÚŸ¸¤½…¾¶ÓāÒ¸ ¶¹²½WÖR´l½Á²Ê½ÁʫΩÄRÊh½ÁÐÞ¸…Ñ·½…ÍŠ½Á²«´¤µhāÒhȁµ«º·¼^¾8¼ŠÎ(½…´l½…¾¸ ĊÅ´¤µ½èΩÄRÊh½ÁÐÞ¸1¼^¾¤½;¸l½…½Á²§¶Þ²¥´¤µh½è´l¾¼Š¶Þ²¶¹²hÈøÆWĊ¾¤º·Ò¸…× âãÄÆWⲏ¤¶ÞÊh½…¾Ÿ¸¤ÒÆ8µÏ¸l´¤¼^´¤¶Þ¸l´¤¶¹Æ…¼ŠÐ¾½Áй¶¹¼^É·¶¹ÐÞ¶Ó´YÚ ÄŠÅ¼ º¾¤ÄŠÉ3¼^É·¶¹Ð¹¶Ó´aÚ»½Á¸l´¤¶¹Î©¼^´l½ŠÑ;¿{½s¶¹² ´l¾¤ÄqÊÒÆW½®´¤µh½yÆWā²qà ÆW½…º´ ĊÅB¿{½Á¶Óȁµ ´¤¶¹²hȥ˫¼^¾¤ÌŠÄGÍ ¼Š¸¤¸¤ÒÎ(º·´¤¶Óā²,Ñ¼Š¸ œāÐÝà ÐÓÄG¿N¸Eo ñ¾šòhum…q»+u ={ …h¼ = ?< ={ …h¼ = ÷„ ñ¾šòhum… »+um…h¼ Ç { …h¼ = ?<†…h¼ Þ`{ …c¼ = ÷ Ñ 9 òhu¸…h¼ Ç { …h¼ = ?<†…h¼ ÞM{ …h¼ = ÷ ò³û` ÷ ñ¾Gòh<†…q»+u ={ …¸?< ={ …h¼ = ÷…„ ñ¾Gòh<†…q»+um…h¼ à { …m?<†…h¼ â`{ …h¼ = ÷ Ñ 9 òhum…c¼ à { …m?<†…c¼ 
âE{ …c¼ = ÷ ò³û"÷ ±–Å<´¤µh½§º¾¤ÄŠÉ·¼^É3¶¹Ð¹¶Ó´aÚsœҷ²ÆW´¤¶Óā²,Ñ)ñ)¾ÁÑ1¶Þ¸©Ò¸l½ÁÊ»¼Š¸ ´¤µh½¿{½Á¶Óȁµ ´;œÒ²·ÆW´¤¶Óā²,Ñ 9 Ñ,´¤µh½½" Ò·¼^´¤¶Óⲏ;¼^É5Ě͊½ É5½ÁÆWāÎ(½ê½" Ò·¼^´¤¶Óⲏþ¼Š¸¤¸¤ÒÎ¥¶¹²hȝélā¶¹² ´Â¶¹²·Êh½…º5½Á²qà Êh½Á²·ÆW½/É5½…´Y¿{½…½Á² ¾¼Š²ÊāÎÔÍ`¼^¾¶Þ¼^É·ÐÓ½Á¸¼Š¸ œāйÐÓÄG¿N¸Eo ñ¾Gòhum…Y»u ={ …h¼ = ?< ={ …h¼ = ÷†„ ñ¾`òhu¸…m?u¸…h¼ Ç { …h¼ = ?<†…h¼ ÞM{ …h¼ = ÷ ò³û"÷ ñ¾šòh< … »+u ={ … ?< ={ …h¼ = ÷„ ñ)¾šòh<†…T?u¸…h¼ à { …¸?<†…h¼ â`{ …h¼ = ÷ ò³ûšýŠ÷ â µ½§½" Ò¼^´¤¶¹Ä²¸©¼^É5Ě͊½¼Š¸¸¤ÒÎ(½ ´¤µ·¼^´(´¤µh½ º¾¤ÄŠÉà ¼^É·¶Þй¶Ó´YÚĊÅ;´¤µh½ƅÒ¾¤¾¤½Á² ´¥´¤¼^ÈÙum…{élā¶¹²ƒ´¤Ð¹ÚÊh½…º5½Á²Ê·¸ ā²É5Ċ´¤µ´¤µh½ º·¾¤½…ÍR¶¹ÄÒ¸°Å:´¤¼^ȁ¸ u …h¼ Ç { …h¼ = ¼Š²Ê´¤µh½ º¾½…ÍR¶Óāҷ¸„ÿ®¿1Ċ¾8ʸ <†…c¼ Þ`{ …h¼ = ¼Š²Ê´¤µ¼^´I´¤µh½èº¾¤ÄŠÉà ¼^É·¶Þй¶Ó´YÚĊÅ,´¤µh½<ƅÒh¾¤¾¤½Á² ´)¿{Ċ¾ʀ<†…Ré³Ä¶Þ²ƒ´¤ÐÓÚ(Êh½…º5½Á²Ê·¸ ā²ó´¤µh½ƅÒh¾¤¾½Á²ƒ´©´¤¼^È ¼Š²Ê»´¤µ½º¾¤½…Íq¶ÓāÒ¸‡Õ´¤¼^ȁ¸ um…h¼ à { …J¼Š²ʧ´¤µh½;º·¾¤½…ÍR¶¹ÄÒ¸C¥¿{Ċ¾Ê¸ <†…h¼ â`{ …h¼ = ×±–ÅB¼ ëI¼Áڊ½Á¸¶¹¼Š²‰Î©ÄRÊh½ÁÐ?¼Š¸¸¤ÒÎ(½Á¸{élā¶¹² ´/¶¹²Êh½…º5½Á²Ê½Á²ÆW½ŠÑ ¿{½ŸÆ…¼ŠÐ¹Ð ¶¹´ ¼ é³Ä¶¹² ´ ¶¹²Ê½…º½Á²·Êh½Á²ÆW½;Î(ÄqÊh½ÁÐò€ˆŠ±³Ë®÷8× p<ÆW´¤Ò¼ŠÐ¹Ð¹ÚŠÑ5Ò·¸¤¶¹²hÈ´¤µh½øº¾¤ÄŠÉ·¼^É·¶Þй¶Ó´YÚ§ÅôÒ²ÆW´¤¶¹Ä²n¼Š¸ ´¤µh½¥¿1½Á¶Óȁµ ´ŸÅôÒ²ÆW´¤¶¹Ä²y¶¹¸/Ω¼^´¤µh½ÁÎ¥¼^´¤¶¹Æ…¼ŠÐ¹ÐÓÚy¶¹²ÆWĊ¾¤Ã ¾¤½ÁÆW´<¼Š²Ê¶ÞÎ(º·Ð¹¼ŠÒ¸¶ÓÉ·ÐÓ½Š× hĊ¾<½WÖh¼ŠÎ(º·ÐÓ½ŠÑ3¿Nµ¶¹Ðӽ贤µh½ ¸¤Ò·ÎùĊÅ?º¾Ċɷ¼^É·¶¹Ð¹¶¹´¤¶Ó½Á¸IĊżŠÐ¹ÐB¸l½Á²ƒ´l½Á²·ÆW½Á¸ ¿ ¶Ó´¤µ ´¤µh½ ¸¤¼ŠÎ©½¥ÐÓ½Á²hȊ´¤µyÉ5½ÁÆWāÎ(½Á¸¥û^×~¶¹²y¼Š²sí<Ë«ËÏÑB¶Ó´;É5½Wà ÆWāÎ(½Á¸?²·¼^´¤Òh¾¼ŠÐ¹ÐÓÚ;ÐÓ½Á¸¸ã´¤µ·¼Š²û^×~è¶¹²ø¼ƒˆŠ±³ËÏ×^â µh½…¾¤½Wà œĊ¾¤½ŠÑ…ˆŠ±³Ën¸¸¤µhāÒйÊy²Ċ´/É5½©Ò¸l½ÁÊs¶¹²Ïƅ¼ŠÐ¹Æ…Òй¼^´¤¶¹²È ´¤µh½º¾¤ÄŠÉ·¼^É3¶¹Ð¹¶Ó´aÚ]Ċż]¸l½Á²ƒ´l½Á²·ÆW½Š× í Ě¿{½…ÍŠ½…¾ÁÑ ¶ÓÅJ¿1½ ¿I¼Š²ƒ´1´lÄ/Ü3²·Ê(´¤µh½NÎ(⏤´1й¶Ó̊½ÁйÚ¸l½"RÒh½Á²ÆW½ ÅpĊ¾1½Á¼ŠÆµ ¸l½Á² ´l½Á²ÆW½/¼Š²Ê]´¤µh½1élā¶¹²ƒ´{º¾¤ÄŠÉ3¼^É·¶¹Ð¹¶Ó´aÚĊÅã½Á¼ŠÆµ›º·¼`à ¾¼ŠÎ©½…´l½…¾<¶¹¸N¾¤½…ȁ¼^¾Êh½ÁÊ«¼Š¸<¼¥¸¤ÆWĊ¾¤½ŠÑˆŠ±³Ë«¸Nµ¼šÍнø²hÄ º¾ĊɷÐÓ½ÁÎ§× ë{Ú]¾¤½…º·Ð¹¼ŠÆ…¶¹²ÈÆWĊ¾¤¾¤½Á¸¤ºā²·Ê¶¹²hÈ/º3¼^¾¼ŠÎ(½…´l½…¾¸…Ñh¼Š² ½WÖq´l½Á²Êh½Áʉí Ë«Ë&ƅ¼Š²yÉ5½(´l¾¼Š²¸lœĊ¾Î(½ÁÊs¶Þ²ƒ´lħ´¤µh½ ÆWĊ¾¤¾½Á¸lº5ā²Ê¶¹²hȉˆŠ±³ËÏÑ¿ µ¶¹Æµþ¶¹¸›Êh½Wܲh½ÁÊþ¼Š¸›ÅpāÐÝà ÐÓÄG¿N¸Eo Š;òcWÝ Ç { Þß ?Lzݤà { 
âLß ÷„» } ñ)¾Gòhu ={ | ?< ={ | ÷ „ | ² … ] =Œ‹ ñ¾šòhum…T?u¸…h¼ Ç { …h¼ = ?<†…c¼ Þ`{ …h¼ = ÷ Ñ ñ¾šòh< … ?u …h¼ à { … ?< …h¼ â`{ …h¼ = ÷‡ ò³û"ã÷ ±³²¼Š²½WÖq´l½Á²Êh½Áʎˆ±–ˉÑWŠ;òc0Ý AM{ A[ß ?LzÝ AM{ A[ß ÷8ÑèÅpĊ¾ ½WÖh¼ŠÎ(º·ÐÓ½ŠÑR´¤µh½ º¾¤ÄŠÉ·¼^É·¶Þй¶Ó´YÚøÄŠÅ ¼ø²hÄRʽúl¼yÈ p?℠øÄŠÅ ´¤µh½ŸÎ©Ä¸l´Nй¶Ó̊½ÁйÚ]¸l½"RÒh½Á²ÆW½Ÿ¶Þ²J¶ÓȁÒh¾¤½©û/¶Þ¸ ƅ¼ŠÐ¹Æ…Òqà й¼^´l½ÁÊ ¼Š¸NÅpāÐÞÐÓĚ¿N¸"o ñ)¾GòÉGG?[ËÌË@Í0?LίÏ~?  h‰y? h‰G÷ Ñ ñ¾GòÒ ?[ɄB?[ËÌË@Í0?LίÏ~?  h‰y? h‰G÷ â µ½/º·¼^¾¼ŠÎ(½…´l½…¾8¸NĊÅ?¼}ˆ±–Ë ¼^¾½/½Á¸l´¤¶¹Î©¼^´l½ÁÊnÉRÚ Ò¸¤¶Þ²hÈ!´¤µh½ìº·¼^¾¼ŠÎ(½…´l½…¾8¸‰ÄŠÅ´¤µh½ìÆWĊ¾¾¤½Á¸lº5ā²Ê¶¹²È í<ËnË ¼Š¸ œāйÐÓÄG¿N¸Eo ñ)¾ 5E698 òhum…m?u¸…h¼ Ç { …h¼ = ?<†…h¼ ÞM{ …h¼ = ÷ } ñ)¾05E6@8òhu … » u …c¼ Ç { …c¼ = ?< …h¼ ÞM{ …h¼ = ÷ Ñ ñ¾Gv`wèòhu¸…h¼ Ç { …h¼ = ?<†…c¼ Þ`{ …h¼ = ÷ ñ)¾ 5E698 òh<†…T?um…h¼ à { …m?<†…h¼ â`{ …h¼ = ÷ } ñ¾ 57698 òh<†…q»+u¸…h¼ à { …¸?<†…h¼ â`{ …h¼ = ÷ Ñ ñ¾ v`wèòhu¸…h¼ à { …¸?<†…h¼ â`{ …h¼ = ÷ ñ)¾ v&w òhum…h¼ Ç { …ú÷ } 05òhu …c¼ Ç { … ÷9-{y Z a ³Ÿ¶À • ³ ò05òhum…c¼ Ç { …ô÷9-‰y^÷  ’‘…“ M° VYžìM·C㯁\ hĊ¾(½WÖqº5½…¾¶¹Î(½Á² ´¤¸…ÑJ¿1½§Ò¸¤½ÁÊÏ´¤µh½§ë{¾¤ÄG¿N² ÆWĊ¾¤º·Ò¸ ¿Nµ¶ÞƵ ÆWⲏ¶¹¸l´¤¸ĊÅû^ѹûŠû"Rѹû"ãM›¿{Ċ¾Ê¸ ¼Š²ʇRÑûãã ¸l½Á² ´l½Á²ÆW½Á¸¥¼Š²·Ê»¶¹¸ø´¤¼^ȊȊ½ÁÊÀ¿N¶Ó´¤µvãyñIõ/öÏ´¤¼^ȁ¸ ”Š× ±–´{¿I¼Š¸ ¸l½…ȁΩ½Á²ƒ´l½ÁÊ ¶Þ²ƒ´lÄ´Y¿{Ä(º·¼^¾¤´¤¸ÁÑR´¤µ½;´l¾¼Š¶¹²¶¹²È ¸l½…´›ÄŠÅøüME•$¼Š²Êì´¤µh½®´l½Á¸l´ ¸l½…´›ÄŠÅ]ûzE•Ñè¶¹²ì´¤µh½ ¿I¼ÁÚþ´¤µ¼^´ ½Á¼ŠÆ8µL¸l½Á² ´l½Á²ÆW½Ï¶¹²þ´¤µ½y´l½Á¸l´¸l½…´ ¿I¼Š¸ ½WÖq´l¾¼ŠÆW´l½ÁÊÅp¾āÎ!½…ÍŠ½…¾¤Ú¥ûz<¸l½Á² ´l½Á²ÆW½Š×?±³²/´¤µh½I¸¤¼ŠÎ(½ ¿I¼Áڊъ¿{½1Ω¼ŠÊh½ ûz`ÃmÅpÄÐ¹ÊøÊ¼^´¤¼ ¸l½…´œĊ¾1ûz`ÃmÅ“ÄÐ¹ÊøÆW¾¤Ä¸¤¸ Í^¼ŠÐ¹¶¹Ê¼^´¤¶Óā² × ±³²Ċ¾8Êh½…¾è´lě¼Š¸¸¤¶Óȁ²®¼ŠÐ¹Ðãºā¸¸¤¶ÓÉ·ÐÓ½Ÿ´¤¼^ȁ¸è´lĽÁ¼ŠÆ8µ ¿{Ċ¾Ê,Ñ¿{½«Î©¼ŠÊ½n´a¿1Ä ¼Š¸¤¸¤ÒΩº´¤¶Óā²%oyƅÐÓā¸l½ÁÊÂ͊Ä^à ƅ¼^É·ÒÐÞ¼^¾¤ÚŸ¼Š¸¤¸¤Ò·Î(º´¤¶Óā²ø¼Š²ÊøÄŠº5½Á²ŸÍŠÄqƅ¼^É·Òй¼^¾Ú¼Š¸³Ã ¸¤ÒΩº´¤¶Óā²,× hĊ¾ÀƅÐÓā¸l½ÁÊÕ͊ÄRƅ¼^É3Òй¼^¾¤Ú漊¸¸¤ÒÎ(ºhà ´¤¶Óā²,Ñ<¿{½ÏÐÓÄRĊ̊½ÁÊ!Òhº!¼»Ê¶¹ÆW´¤¶¹Ä²¼^¾¤Ú󴤼жÞÐÓĊ¾¤½ÁÊþ´lÄ ´¤µh½ ë{¾¤Äš¿N²ÿÆWĊ¾¤º3Ò¸…×7±³²ç´¤µ¶¹¸«Æ…¼Š¸l½ŠÑ´¤µ½¼šÍн…¾là ¼^Ȋ½‰²RÒÎÉ5½…¾§ÄŠÅ´¤¼^ȁ¸nº5½…¾§¿1Ċ¾Ê!ɽÁƅ¼ŠÎ©½óû^×û h× hĊ¾©ÄŠº5½Á² ͊Äqƅ¼^É·Òй¼^¾ډ¼Š¸¤¸¤Ò·Î(º´¤¶Óā²,Ñ)¿1½ ÐÓÄRĊ̊½ÁÊ 
Òhºê¼ÂÊ¶ÞÆW´¤¶Óā²¼^¾¤Úþ´¤¼Š¶¹ÐÓĊ¾½ÁÊLā²йÚì´lÄþ¼Â´l¾¼Š¶¹²¶¹²È ¸l½…´y¶¹²çĊ¾Êh½…¾y´lĝ¼Š¸¤¸¤¶¹È²çº5⏤¸¶ÓÉ·ÐÓ½s´¤¼^ȁ¸s´lÄ윾¤½Wà RÒh½Á²ƒ´<¿{Ċ¾Ê¸<¿Nµhā¸l½øÅ“¾¤½" Ò½Á²ÆWÚ ¶¹¸ Ȋ¾¤½Á¼^´l½…¾;´¤µ¼Š² R×J±–²øÆ…¼Š¸l½IĊž¼^¾½1¿{Ċ¾Ê¸…Ñ^´¤¼^ȁ¸?¶¹²´¤µh½Iʶ¹ÆW´¤¶Óā²·¼^¾¤Ú ¿{½…¾¤½n¼Š¸¤¸¶Óȁ²h½ÁÊ󼊲·ÊÀ´¤µh½Á²Øy´¤¼^ȁ¸]¿N¶Ó´¤µÂµ¶Óȁµh½Á¸l´ ¸¤ÆWĊ¾¤½N¿1½…¾¤½ ¼Š¸¤¸¤¶Óȁ²h½Á橃 ÚÒ¸¤¶¹²È輟²¼Š¶Ó͊½NëI¼Áڊ½Á¸¶¹¼Š² ƅй¼Š¸¤¸¶ÝÜ·½…¾GòôË«¶Ó´¤Æµ½ÁйÐúÑ ûÁüŠüýŠ÷¥ÆWⲏ¤¶ÞÊh½…¾¶¹²hÈyÆ8µ¼^¾¼ŠÆ à ´l½…¾Nœ½Á¼^´¤Òh¾¤½Á¸N¼Š¸ œāйÐÓĚ¿ ¸Eo ñ¾šòhum…T?<†…ú÷ } ñ¾šòhum…ú÷ Ñ ñ¾šòh<†…q» um…ú÷ – U j õRqõ ø e õ0S j ›†R°S–RTöEõRTö$—TRmSTò k ø Q~— øÐø e™˜ R†— j ›Cš j S–QñõR õ e šLS‘“S–ï$— ø e S3›œ Zx]g9ž)Ÿ Q!ö‡›€ ¡¢£¤¡ Ÿ ˜¸ò¥› b¦7¦¨§Y©]f)¦EŸ õ e š+ò j l’› UWb¦EŸ õ e š+ò k RmlR†lRT› j ˜ Rôqál j › õ ø R \ l j[k ö‡— j loš ï+S e ö ô õ e šLS k Qñõ ø ›€ª Ÿ ‘“ö j õ¸˜2S–ï$— ø e S3› \`§ ^ ª Ÿ k RmlRWlR«š+O e —TRô ÷Eð— j l–lRTS€š j öô×Qûö+š õ e šLS k Qñõ ø j ï+õ›€ª Ÿ S–ï— ø e S› \`§ ^>Ÿ n „ ñ¾šòhum…m÷ Ñ e ² ˆ ] = ñ¾šò S ˆ … »+um…m÷ ¿Nµ½…¾¤½ S ˆ … ¶¹²Ê¶Þƅ¼^´l½Á¸„‹^Ãm´¤µsÆ8µ¼^¾¼ŠÆW´l½…¾øÅp½Á¼^´¤Òh¾½Á¸;ĊŠ<†…¼Š²ʬ ¥ò } û"÷¶¹¸´¤µh½{²RÒÎÉ5½…¾ ĊŷƵ·¼^¾¼ŠÆW´l½…¾JÅp½Á¼`à ´¤Òh¾½Ï´YÚRº5½Á¸n¶Þ²ƅйÒʶ޲hȉº·¾¤½WÜÖq½Á¸‰ò“¿Nµh⏤½‰ÐÓ½Á²hȊ´¤µ ¶¹¸®û«´¤µh¾¤ÄÒȁµ ÷8Ñ踤ҭP¥ÖR½Á¸®ò“¿Nµhā¸l½yÐÓ½Á²Ȋ´¤µì¶Þ¸«û ´¤µh¾āÒhȁµú ÷8Ѷ¹Å%<†…ÆWⲃ´¤¼Š¶Þ²¸I² ÒÎÉ5½…¾¸…ÑR¶ÓÅ%<†…ãÆWā²qà ´¤¼Š¶¹²·¸B¼Š²(¶Þ²¶Ó´¤¶¹¼ŠÐ Òhºº5½…¾ƅ¼Š¸l½IÐÓ½…´l´l½…¾Áу¶¹Å2< … ÆWā² ´¤¼Š¶¹²¸ ¼Š² ڟ²hā²qÃY¶Þ²¶Ó´¤¶¹¼ŠÐÒºº5½…¾ƅ¼Š¸l½1й½…´l´l½…¾Áс¶ÓŽ<†…·ÆWā² ´¤¼Š¶¹²¸ µ Ú º3µh½Á²¸…×1±–²§´¤µ·¶¹¸ ƅ¼Š¸l½ŠÑ3´¤µh½¼šÍн…¾¼^Ȋ½ø²RÒÎÉ5½…¾ ĊŠ´¤¼^ȁ¸›º5½…¾›¿1Ċ¾8ÊÂÉ5½Áƅ¼ŠÎ(½@R×~»¼Š²Êì´¤µh½«¾¼^´l½®ÄŠÅ ¿{Ċ¾Ê¸1´¤µ¼^´Iµ¼Á͊½è´¤µh½<ÆWĊ¾¤¾¤½ÁÆW´I´¤¼^Èø¼ŠÎ©Ä²hÈø¼ŠÐÞÐ3¼Š¸³Ã ¸¤¶¹È²h½Áʛ´¤¼^ȁ¸NÉ5½Áƅ¼ŠÎ(½øüŠüR×ûã>•× J¶ÓȁÒ¾¤½Ð¶¹Ð¹Ð¹Ò·¸l´l¾¼^´l½Á¸IȊ¾¼^º·µ·¸¸¤µhĚ¿ ¶¹²hÈ(´¤µh½/¼šÍƒÃ ½…¾¼^Ȋ½(¼ŠÆ…Æ…Òh¾8¼ŠÆWÚ ¾¼^´l½Á¸<ĊÅí ˫˫¸<¼Š²·Ê®ˆŠ±³Ë«¸ Ò²hà Êh½…¾è´¤µh½ƅÐÓā¸l½Áʫ͊Äqƅ¼^É·Òй¼^¾Ú ¼Š¸¤¸¤ÒÎ(º·´¤¶Óā²,× í ½…¾¤½ŠÑ й¼^É5½Áй¸ ¶¹²ä´¤µh½þÖRÃY¼`Öh¶¹¸À¸lº5½Áƅ¶ÓÅ“ÚæÎ(ÄqÊh½Áй¸»¶¹²ä´¤µh½ ¿I¼ÁÚP´¤µ¼^´ Ç { Þ à { â Êh½Á²hĊ´l½Á¸4Û<òc Ý Ç { Þß ?Lz ݤà { âVß ÷»Äо Š;òc Ý Ç { Þß ?Lz 
Ýáà { âVß ÷8×Lâ µh½ΩÄRÊh½ÁÐÞ¸¥¼^¾¤½«¼^¾¤¾¼Š²hȊ½ÁÊ ÉRڝ´¤µh½‰¼Š¸ÆW½Á²Ê¶¹²hÈóĊ¾Êh½…¾ĊÅ(´¤µ½…ÄŠ¾¤½…´¤¶¹Æ…¼ŠÐø² ÒÎ(à É5½…¾/ĊÅIº·¼^¾¼ŠÎ©½…´l½…¾¸…×]âNµh½Ü·¾¸l´;´a¿1Ä«Î(ÄqÊh½Áй¸/¼^¾¤½ ¸l´¤¼Š²·Ê¼^¾ʛÎ(ÄqÊh½Áй¸I¼Š²Ê´¤µh½èĊ´¤µh½…¾¸¼^¾¤½è½WÖq´l½Á²Êh½ÁÊ Î(ÄqÊh½Áй¸Á×nâ µh½¼Á͊½…¾¼^Ȋ½n¼ŠÆ…Æ…Òh¾¼ŠÆWÚϾ8¼^´l½Á¸É5½…ڊā²Ê ´¤µh½®¾¼Š²Ȋ½«ÄŠÅ½Á¼ŠÆµþȊ¾¼^º3µì¼^¾½¥é¤Ò¸l´É5½ÁÐÓĚ¿Ù´¤µh½ ܷȁÒ¾¤½Š× ±³²´¤µ¶¹¸)ܷȁÒh¾¤½ŠÑq¿{½;ƅ¼Š²›ÄŠÉ·¸l½…¾¤ÍŠ½è´¤µ¼^´I´¤µh½;¸¤¶¹Î(à º·ÐÞ¶ÝÜ·½ÁÊ<É·¼ŠÆ¤Ì ÃmÄ^å]¸¤Î©Ä Ċ´¤µ¶Þ²hÈ´l½ÁƵ²·¶Ÿ Ò½Ω¶Ó´¤¶Óȁ¼^´l½Á¸ ¸lº3¼^¾¸l½WÃYʼ^´¤¼ùº¾¤ÄŠÉ·ÐÓ½ÁÎ¥¸¶¹²ÙÉ5Ċ´¤µ:í ˫˫¸ç¼Š²Ê ˆ±–Ë«¸…ׄp<¸N½WÖqº5½ÁÆW´l½ÁÊ,Ñ9ˆŠ±³Ën¸ ¼ŠÆµ¶Ó½…ÍŠ½Á¸èµ¶¹Èµh½…¾ ¼ŠÆ à ƅÒh¾8¼ŠÆWÚ<´¤µ·¼Š²/´¤µh½)ÆWĊ¾¾¤½Á¸lº5ā²Ê¶¹²ÈIí<Ë«Ën¸ã¶¹²/¸lāÎ(½ ½WÖq´l½Á²Êh½ÁÊ Î(ÄqÊh½Áй¸êÆWⲏ¤Òй´¤¶¹²hÈ'¾¶ÞƵ ÆWⲃ´l½WÖq´¤¸…× ±–´§¶¹¸]¸¤´¤¼^´¤¶¹¸l´¤¶¹Æ…¼ŠÐ¹Ð¹ÚÀ¸¤¶Óȁ²¶ÝÜƅ¼Š²ƒ´]¿N¶Ó´¤µìÆWā²qÜ3Êh½Á²ÆW½ üŠü`´¤µ¼^´´¤µh½Î(ÄqÊh½ÁÐúÑxŠ/òc Ý AM{ A[ß ?Lz Ý ={ñ=¸ß ÷¥òúüãR×~Á>•©÷8Ñ ¶¹¸É5½…´l´l½…¾ ´¤µ¼Š²Ô¼Š²ƒÚÕĊ´¤µh½…¾ÀÎ(ÄqÊh½Áй¸¶¹²ƅйҷʶ¹²hÈ ´¤µh½«¸l´¤¼Š²Ê¼^¾8ÊÀÉ·¶ÓȊ¾8¼ŠÎàí Ë«Ë‰Ñ ÛèòcWÝ ={ _ ß ?LzÝ _ { _ ß ÷ òúüý ×ûý•©÷)¼Š²Ê¥´¤µh½É5½Á¸l´)í<ËnˉѱÛèòc Ý ={ñ=¸ß ?Lz Ý ={ _ ß ÷ òúüý ׏ü>•©÷8× J¶ÓȁÒ¾¤½ú Êh½…º3¶¹ÆW´¤¸;Ȋ¾¼^º·µ·¸/¶¹²Ê¶Þƅ¼^´¤¶¹²hț´¤µh½¥¼šÍƒÃ ½…¾¼^Ȋ½(¼ŠÆ…Æ…Òh¾8¼ŠÆWÚ ¾¼^´l½Á¸<ĊÅí ˫˫¸<¼Š²·Ê®ˆŠ±³Ë«¸ Ò²hà Êh½…¾è´¤µh½Ċº5½Á²«ÍŠÄRƅ¼^É3Òй¼^¾¤Ú¼Š¸¸¤ÒÎ(º´¤¶¹Ä²,ׯj<²й¶¹Ìн J¶ÓȁÒh¾½ RÑ^´¤µh½1ΩÄRÊh½ÁÐmÑ ÛèòcWÝ AM{ _ ß ?LzÝ ={ _ ß ÷8ъ¼ŠÆ8µ¶Ó½…ÍŠ½Á¸ ´¤µh½nÉ5½Á¸l´¥¼ŠÆ…Æ…Òh¾¼ŠÆWÚó¾¼^´l½ òúüR×ûã>•¥÷©¿ ¶Ó´¤µÀÆWā²qÜà Êh½Á²·ÆW½üŠü>•× ° ™]HC1X±YU)\^VYHC 9y½®µ¼Á͊½nº¾¤½Á¸l½Á² ´l½ÁÊÀ´¤µ½ ½WÖR´l½Á²·Êh½ÁÊÂí<Ë«Ën¸¥ÅpĊ¾ 1²hȁй¶¹¸¤µ(ñIõ/öø´¤¼^Ȋȁ¶¹²Èhу¿Nµ¶ÞƵ©Æ…¼Š²]ÆWⲏ¤¶¹Êh½…¾)¾¶¹Æµ ²³«´ µ ²³«´ ¶ ²³«´ · ²³«´ ¸ ²³«´ ¹ ²¹«´ µ ²¹«´ º ={ _ _ { _ AM{ _ _ { _ ={ _ ={ _ AM{ _ ={ _ ={ñ= _ { _ ={ñ= ={ _ ={ _ AM{ _ AM{ _ AM{ _ ={ñ= AM{ _ ={ _ ={ñ= AM{ _ ={ñ= ={ñ= ={ñ= AM{ A _ { _ AM{ A ={ _ AM{ A AM{ _ AM{ A ={ñ= ={ _ AM{ A AM{ _ AM{ A ={ñ= AM{ A AM{ A AM{ A »†¼½¼ Ñ Ñ Ñ Ñ Ñ Ñ Ñ Ñ Ñ Ñ Ñ Ñ Ñ Ñ Ñ Ñ Ñ Ñ Ñ Ñ ¾¿o¼ À À À À À À À À À À À À À À À èVº n èMí è` n ªTå è` n ÂLíè` n ºMè èM n Ã`Äè` n ÂVí èMà n 
[Figure: Average accuracy rates of HMMs and JIMs — results under the closed vocabulary assumption]

[Figure: Average accuracy rates of HMMs and JIMs — results under the open vocabulary assumption]

Conclusion

We have presented the extended HMMs for English POS tagging, which can consider rich information in contexts. In the models, a simplified version of back-off smoothing is used to mitigate the data sparseness problem. The models assume joint independence between random variables in order to make the parameter estimation more reliable.

From the experiments, we have observed that extended models achieved even better results than the standard models in the case of both HMMs and JIMs, that the simplified back-off smoothing technique mitigated data sparseness quite effectively, and that some extended JIMs outperformed the corresponding HMMs. Under the closed vocabulary assumption, the best JIM outperformed the best HMM. On the contrary, under the open vocabulary assumption, the best HMM outperformed the best JIM. Intuitively speaking, it is empirically shown that the joint independence assumption is more effective than the Markov assumption in some models that consult specific features such as lexicalized ones.

Generally, the uniform extension of models requires a rapid increase in the number of parameters, and hence suffers from large storage requirements and sparse data. Recently, in many areas where HMMs are used, many efforts to extend models non-uniformly have been made, sometimes resulting in noticeable improvement. For this reason, we are trying to transform our uniform models into non-uniform models, which may be more effective in terms of both space complexity and reliable estimation of parameters, without loss of accuracy. We are also trying to apply our models to different areas such as information extraction in the bio-molecular domain, noun phrase chunking, and so on.

References

E. Brill. 1994. Some Advances in Transformation-Based Part of Speech Tagging. In Proc. of the 12th National Conference on Artificial Intelligence (AAAI).
E. Charniak, C. Hendrickson, N. Jacobson, and M. Perkowitz. 1993. Equations for Part-of-Speech Tagging. In Proc. of the 11th National Conference on Artificial Intelligence (AAAI), 784-789.
S. F. Chen. 1996. Building Probabilistic Models for Natural Language. Doctoral Dissertation, Harvard University.
E. Çinlar. 1975. Introduction to Stochastic Processes. Prentice-Hall, New Jersey.
R. O. Duda and P. E. Hart. 1973. Pattern Classification and Scene Analysis. John Wiley.
W. N. Francis and H. Kučera. 1982. Frequency Analysis of English Usage: Lexicon and Grammar. Houghton Mifflin Company, Boston, Massachusetts.
I. J. Good. 1953. The Population Frequencies of Species and the Estimation of Population Parameters. Biometrika, 40:237-264.
S. M. Katz. 1987. Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recognizer. IEEE Transactions on Acoustics, Speech and Signal Processing (ASSP), 35(3):400-401.
S.-Z. Lee, J.-D. Kim, W.-H. Ryu, and H.-C. Rim. 1999. A Part-of-Speech Tagging Model Using Lexical Rules Based on Corpus Statistics. In Proc. of the International Conference on Computer Processing of Oriental Languages (ICCPOL).
S.-Z. Lee. 1999. New Statistical Models for Automatic POS Tagging. Doctoral Dissertation, Korea University, Korea.
Y.-C. Lin, T.-H. Chiang, and K.-Y. Su. 1992. Discrimination Oriented Probabilistic Tagging. In Proc. of ROCLING V, 87-96.
B. Merialdo. 1991. Tagging Text with a Probabilistic Model. In Proc. of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), 809-812.
T. M. Mitchell. 1997. Machine Learning. McGraw-Hill.
L. Padró. 1996. POS Tagging Using Relaxation Labelling. Research Report, Departament de Llenguatges i Sistemes Informàtics, Universitat Politècnica de Catalunya.
A. Ratnaparkhi. 1996. A Maximum Entropy Model for Part-of-Speech Tagging. In Proc. of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 133-142.
C. Samuelsson. 1993. Morphological Tagging Based Entirely on Bayesian Inference. In Proc. of the 9th Nordic Conference on Computational Linguistics.
H. Schmid. 1994. Part-of-Speech Tagging with Neural Networks. In Proc. of the International Conference on Computational Linguistics (COLING), 172-176.
Language Independent, Minimally Supervised Induction of Lexical Probabilities

Silviu Cucerzan and David Yarowsky
Department of Computer Science and Center for Language and Speech Processing
Johns Hopkins University
Baltimore, MD 21218
{silviu,yarowsky}@cs.jhu.edu

Abstract

A central problem in part-of-speech tagging, especially for new languages for which limited annotated resources are available, is estimating the distribution of lexical probabilities for unknown words. This paper introduces a new paradigmatic similarity measure and presents a minimally supervised learning approach combining effective selection and weighting methods based on paradigmatic and contextual similarity measures populated from large quantities of inexpensive raw text data. This approach is highly language independent and requires no modification to the algorithm or implementation to shift between languages such as French and English.

1 Introduction

Part-of-Speech tagging of English has reached a level which seems to resist any improvement. Methods like Transformation-based tagging (Brill, 1995), MaxEnt (Ratnaparkhi, 1996), Boosting (Abney et al., 1999), and TnT/Markov models (Brants, 2000) achieve accuracies comparable with human performance for this task.

However, if we break the results into two parts, for known and unknown words, we can see that the performance of English taggers is much lower on the latter. The situation is even worse for languages other than English, especially inflective languages, for two reasons: first, there is usually less annotated data available and second, the coverage of such data is much lower due to the high number of different word-forms in these languages (for a comparison of properties and tagging results for several such languages see (Hajič, 2000)). Moreover, many of the words not found in the (small) training data are in fact inflected forms of quite common words. In the work described herein we therefore concentrate on the problem of unknown words in the context of probabilistic tagging.

Although the annotated resources are limited or even non-existent for most languages, the raw text available online is effectively unlimited with respect to the needs of most NLP applications. This paper presents a newly developed paradigmatic similarity measure that tries to maximize the benefits that can be obtained from limited annotated resources using a large amount of raw data, by magnifying the impact and coverage of the small tagged datasets.
°®£I¢ªN¢£I¢£I¡¢D¬©³ö¤Ä·¢DªI§©¥9§žÂ°®¸^¯2§v¤¦°´¡1µT°®¯"°®¨´§©¥Ä°Á¤=ºD¯"¢‘§v² µ¦¿¥Ä¢›°®£Ë¡¬^¯Ô­°®£I§©¤¦°®¬ž£5½°Á¤¦·Ë¡¬^£z¤¦¢ȟ¤Ä¿I§©¨"¯"¢‘§žµ¦¿¥¦¢‘µ¹ ¤Ä·¢-º§ž¥¦¢1¢-Æv§©¨®¿I§©¤¦¢‘œ°Á£¤¦·¢¡¬^£z¤¦¢ȟ¤6¬©³IªN§©¥¦¤T²¶¬©³´²?µTªd¢-¢‘¡9· ¤9§©¸ž¸^¢-¥ªN¢¥T³¼¬^¥¦¯2§ž£I¡¢³¼¬ž¥†¢¯Ô­d¢ÂI°®£¸4§©¨®¸ž¬^¥¦°Á¤Ä·¯2µ ¿Iµ¦°®£¸xÝI¥¦¢£I¡9·«§ž£IÂ÷Þ+£I¸ž¨®°´µT·÷§^µW¥Ä¢-ª¥Ä¢µ¦¢-£z¤9§v¤¦°®Æ^¢µ¬©³ ­d¬©¤Ä·°®£øN¢¡¤Ä°®Æž¢œ§©£I§©£I§ž¨®º^¤Ä°´¡-§©¨N¨´§©£¸^¿I§©¸^¢µ-Ì   FIY+šá<=E PÅE‰[ŸCdFI; ᝟;TY+K ÛY”;?>6JXŸ;TY+K J6KáS  FIEG>;=Y+@D[ Y”FI] ý?£ ¤Ä·°´µÅªI§©ªd¢-¥¹½1¢«µ¦·I§©¨®¨Ú¿Iµ¦¢5¤¦·I¢ú¤¦¢¥¦¯2µ    ¬^¥"!#%$    ³¼¬^¥§\¸ž°®Æ^¢-£Å½1¬ž¥9Âɤ¦¬\¥¦¢-³¼¢-¥¤¦¬ ¤Ä·¢ªI¥¦¬^­I§©­°®¨®°Á¤=º'&)(*,+ -/.¬©³ ã §ž¥T¤¦²{¬ž³´²?䏪N¢¢¡9·«ê ã10 äï ¤9§©¸^µÔ³¼¬^¥"½t¬^¥ÄÂ'-V°®£I¢-ªd¢-£I¢£z¤Ü¬ž³D¡-¬ž£z¤¦¢-Èz¤¹1§^µ"°´µ=² ¤Ä°Á£N¡¤Ø³¼¥Ä¬ž¯Ç½·I§v¤q½1¢¡§©¨®¨ ¤¦·¢  23!#   )4 2,!  5%67!8 9 &)(8*,+ :,;=<* >=?@*AB-/.¹á§©£IÂɧž¨®µ¦¬°´µ=¤Ä°Á£N¡¤Ü³¼¥¦¬^¯æ¤Ä·¢W¡¬^£² ¡-¢-ª¤1¬©³CD79@9E%GFH 4 % E   &)(JI/.¹z½·°´¡9·"¥Ä¢³¼¢¥Äµâ¤¦¬ ¤Ä·¢"ª¥Ä°Á¬^¥áªI¥¦¬^­I§©­°®¨®°Á¤=º¬ž³1§¤Ä§ž¸ÚµT¢‘×^¿I¢-£I¡-¢KIó¼¥Ä¬ž¯ § ¸^¢-£¢¥Ä§©¤¦°®£¸Ôµ¦¬ž¿I¥Ä¡-¢©Ì ͉¬³‚§ž¡-°®¨Á°Á¤9§v¤¦¢2¡-¨®¢§©¥Ô¢ȏªd¬^µ¦°˜¤Ä°®¬ž£G¹”½1¢¿Iµ¦¢·¢-¥Ä¢#¤Ä·¢ L °®¥¦¢‘¡¤NM订ȏ°´¡-§©¨$ª¥Ä¬ž­I§ž­°®¨®°˜¤=ºV¬©³\¤Ä§ž¸Ð¸ž°®Æž¢£ ½1¬ž¥9 &)(8*,+ -/.¹­¿¤1¡-¬ž¥Ä¥¦¢‘µTªd¬ž£N°®£¸D§ž¥¦¸^¿¯"¢-£z¤9µ6·¬ž¨´Â³¼¬ž¥+¤Ä·¢ ¡-¨®§^µ¦µ¦°´¡-§ž¨ õ éÖéÐë§gºž¢‘µT°´§©£Ô¯"¢¤Ä·¬Âê#Ot·I§©¥Ä£°´§ Ê ¢¤1§ž¨{̘¹ ì‘ížíQPzït¿IµT¢‘­zº#¤¦·¢¤Ä§©¸^¸ž¢¥Äµ½1¢¡¬^£IµT°´Â¢¥¦¢‘Â"³¼¬^¥Ø¢-Æv§©¨Á² ¿I§©¤¦°®¬ž£ª¿¥ÄªN¬zµT¢œ¬ž³'¤Ä·¢8ª¥Ä¢µ¦¢-£z¤½t¬^¥ Ê Ì RGS8T UWVX7Y8Z[Y8Z[\]X_^`XbadceX_VX7fg^ihVY8j%^iYfQjlkHY^`c m hnj`oWhnfg^dU1prqZtsZepnk)Zvuwp@V=xej 㠥Ģ-Ɵ°®¬ž¿NµT¨®º«¿I£IµT¢¢-£ ꂬž¥ L ¿I£ Ê £¬v½£nMTï½t¬^¥ÄÂIµ4¬©³´¤Ä¢-£ ¥Ä¢-ª¥Ä¢µ¦¢-£z¤8§µ¦°®¸ž£°ÁÙN¡§©£z¤qªd¬ž¥¦¤¦°®¬ž£W¬©³â¤Ä·¢Üƞ¬¡-§ž­¿¨´§©¥Äº©¹ §žµÔ°®¨®¨Á¿Nµ=¤Ä¥Ä§©¤¦¢Â°®£†ÝX°®¸ž¿¥Ä¢Öì2³¼¬^¥ÔÆv§ž¥¦°®¬ž¿Nµ8ƞ¬¡-§ž­¿¨´§©¥Äº µ¦°y¢µ-Ìâþجž¤¦¢œ¤¦·N§v¤Ø³¼¬^¥¤¦·¢ÝI¥¦¢£I¡9·¤¦¥9§©°®£°®£¸k§v¤9§¹¤Ä·¢ 0 ¿¤¦²{¬ž³´²{z”¬¡-§ž­¿¨´§©¥Äº#ê 0|0 
zœï”¥9§v¤¦¢á¥¦¢¯2§©°®£Iµ”¥Ä¢-¨´§©¤¦°®Æž¢¨Áº ·°®¸ž·³¼¬ž¥8­d¬©¤Ä·$¤¦¬ Ê ¢-£Iµ"ꡬ^¥¦ª¿Nµœ°Á£Nµ=¤9§©£I¡-¢µD¬©³1½t¬^¥Äµ9ï §©£NÂܤ=ºŸªN¢‘µáê‚Æž¬¡-§ž­¿¨´§©¥Äº½1¬ž¥9µÄï¹z§^µ”³¼¬ž¿I£IÂ2°®£2§8·¢¨´ÂŸ² ¬ž¿¤âµT¢-¤+¬©³6ì`}¹ ôžô^ôؤĬ Ê ¢-£IµØê¼³¼¥¦¬^¯÷¤Ä·¢ØÝ¥Ä¢-£I¡9·Ü¨®¢ȏ°´¡-§ž¨®¨Áº §©£I£¬©¤9§v¤¦¢‘†µ¦°´Â¢¬ž³Ø¤¦·I¢ õ §©£Nµ¦§ž¥Äµ9ï9̛Í·¢¥9§v¤Ä¢µÜ§©¥Ä¢ ¡¬^¯"ª¿¤Ä¢Âk°®¸ž£¬^¥¦°®£¸q¡§©ª°Á¤9§©¨®°§v¤Ä°®¬ž£§©£NÂÔ£I¬ž¥Ä¯2§©¨®°-°®£¸ §©¨®¨G£Ÿ¿¯k­N¢¥Äµ1¤Ä·I§v¤D§©ªªd¢§ž¥Ø°®£¤Ä·¢8¤Ä¢ȟ¤-¹ µT¬Ü¤Ä·I§v¤Ø¤Ä·¢-º §©¥Ä¢œ£¬©¤Ø¡-¬ž¿£z¤¦¢‘§^µ¿£ Ê £¬v½£½t¬^¥ÄÂIµÌ ÿ‰§ž£¸ž¿N§©¸ž¢žù6Ý6ðÞtþ~O õ 70% 10% 20% 30% 40% 50% 60% 80% 90% OOV ratio 5% 86.24% 32.99% 35.74% 8.89% 2k 4k 8k 30k 60k Training Set Size (number of tokens) 15k tokens types ÝX°®¸ž¿¥Ä¢ì©ùH€Wƒ‚N„,…‡†d„%ˆ~„K‰yŠŒ‹…‡Ži‹‘%‰1’„%…N„Kˆ ސ“†•”…‡–Œ† — „%‹Œˆ‡„,‚N’ˆB˜ Ý'°®¸^¿¥¦¢óÚµT·I¬v½Øµ¤Ä·¢§^Æv§©£z¤Ä§ž¸ž¢2¬ž³Ø¿IµT°®£¸§©£†§^Ÿ² °Á¤¦°®¬^£I§©¨ö¨´§ž¥¦¸^¢8¿£I§©£I£¬©¤9§v¤¦¢‘ÂÖ¡¬^¥¦ªI¿IµÌ8äz¤Ä§ž¥T¤Ä°®£¸2½°Á¤¦· ¬ž£I¨Áº¤¦·¢ 0|0 zŽ1¬ž¥9µ”°®£Ü¤Ä·¢¤¦¢‘µ=¤µT¢-¤â¥Ä¢-¨´§v¤Ä°ÁÆ^¢1¤¦¬8¤Ä·¢ §©£I£¬©¤9§v¤¦¢‘¤¦¥9§©°®£°®£¸2µ¦¢¤œ¬©³1ñžô¹ ôžô^ôܤ¦¬ Ê ¢£Iµ¹ ½1¢k¡-¬ž¯Ü² ª¿¤Ä¢#¤Ä·¢#ªd¢-¥9¡¢£z¤Ä§©¸^¢2¬©³¤Ä·¢µ¦¢#½1¬ž¥9µ¤¦·I§©¤k§©¥Ä¢2£¬ž¤ µ¦¢-¢-£›¢-Æ^¢-£É°®£4¤Ä·¢Ú¨´§©¥Ä¸ž¢¡¬ž¥Äª¿Iµ-ÌR Ø¨®¯"¬^µT¤kí¬^¿¤"¬©³ ìô¬©³á¤¦·¢Ú¬^¥¦°®¸^°Á£N§©¨ 0|0 z ½t¬^¥Äµk¬Ï§©ªIªN¢‘§©¥Ü°®£†¤Ä·¢ £¢½«ê¼¥9§g½qïâÂI§v¤Ä§¹½·°´¡9·2¯"¢§ž£Iµ+½1¢D¡§©£·¬žªd¢q¤Ä¬k¡-¬ž¨Á² ¨®¢¡¤Ô§žÂÂ°˜¤Ä°®¬ž£I§ž¨âµ=¤9§v¤Ä°®µT¤¦°´¡µœ¬ž£Ï¤Ä·¢-¯̚™$¢µ=¤Ä°Á¨®¨+·N§gƞ¢ ¤¦¬k¿Iµ¦¢Dµ¦¯"¬Ÿ¬©¤Ä·°®£¸8¤¦¬k¢µT¤¦°®¯2§©¤¦¢¤9§©¸ªI¥¦¬^­I§©­°®¨®°Á¤¦°®¢µX³¼¬^¥ ¤¦·I¢8¥¦¢¯2§©°®£°®£¸2ìgó›Ã¬©³X¤¦·I¢ 0|0 z5½1¬ž¥9µÌ ÿ‰§ž£¸ž¿N§©¸ž¢žù6Ý6ðÞtþ~O õ 3 million 6 million 12 million 12% 14% 16% 18% 20% 22% Untagged Corpus Size (number of tokens) types tokens 12.95% 12.18% 18.34% 20.36% % of OOV words not found in the raw corpus œežQŸ7 ¡£¢_¤e¥€¥W¦¨§ª©%«N¬­®Œ©%¯[­ °°® °±=°®~²®|³1´²¶µ`µi°«t·®n³`®Q¸ ®Œ©%¯N³%¯‡°3¬ ¯‡°¹g¯»ºy«‡©i¼¯‡½°£­‡³`¼|°€¾©`«‡¿Œ·­€À¯‡½Œ°£Á1³`®­‡³%«N¬­ ÃÄJà 
ÅdÆÈÇ`ÉnÊË8Ì[ɑÍÌ[ËÎ@ɌÏÇ`ÆÈÊ1ÐCɌÑËÒQÆÈÊ1Ó ÏË8ÔÈÏ Õ ÔÖeÉnÊ ×/Ø yÙȝڝÜÛgÝßÞÈÛ Ø ¡`ݝyÙÈ¡gà Úá7¡ãâ7 äQÞÈÛQÞ7Ýyyڇåçæ7 Ø ÚB Þ7Ÿ_ڝyäÙ äè¡% 1é£ê~ë)ÚÛgž ØWì ä £â7 B¡`ènyäŸ Ø ÝyåKŸ7Ù Ø ¡%¡`Ùîí1äQ æ ØCïñð ÛgÙ ÞE¡ßÛgâ7âÈ Bä=ò_ólÛÚ¡`æHÞnå•Û Ø Ù7žÝ¡ôólÛò_ódŸÈóHõ{ݝög¡%Ýyyá7änänæ ÚÛgžH¡ Ø ÚBólÛÚ¡ Ø áÈÛQ B¡iælÞnå)ÚBáÈ¡ ì Ÿ7ÝÝèQä ð ÛQÞ7Ÿ7ÝÜÛg åE¤ ÷ ø)ù8ú,û ï/ü1ý ø)ù8ú ü × ÙÈÛgÚBŸ7 ÛgÝ7 ¡,þ@Ù7¡%ó)¡`ÙÚ» Ø Úä|¡,ò ð ÝŸÈæ_¡£Úá7¡€ó)ä Ø Ú ì  ¡,õ ÿ Ÿ7¡`ÙÚ í1äQ æ Ø yÙ ÚBá7¡rÛgÙÈÙ7ägÚÛÚB¡iæ ð äQ â7Ÿ Ø•ì  Bäó ÚBá7 Ø ì  ¡ ÿ Ÿ7¡`Ù ð å'æ7 Ø ÚB Þ7Ÿ_ڝyäÙ ð äó)â7Ÿ_ÚÛÚBäÙ  Ø ¡%¡Qà ì ä )¡,ònõ ÛQó)â7Ý¡gà   ÛgÙŒÚ Ø àE¢ á7¡`ّÚBá7¡ Ø  %¡|ä ì ÚBá7¡ ÛQÙ_õ Ù7äQÚÛgÚB¡`æ ð äQ â7Ÿ Ø  Ø Ù7äQڕÝÜÛg žQ¡¡%Ù7äŸ7žQáGà£í1¡ ð ÛgÙvŸ Ø ¡ ÛQÙ7ägÚá7¡% ~Ÿ7ÙÈÛQÙ7Ù7ägÚÛÚ¡`æ ð ä Bâȟ Ø Úä•Üæ_¡`Ùڝ ì å•Úá7¡dó)ä Ø Ú ì  ¡ ÿ Ÿ7¡`Ùڀí1äQ æ Ø ÙlÚáÈÛÚôÝÜÛgÙȞQŸÈÛQžQ¡ ÃÄ  ÆeË`Æ7Ê8ËÆ`Ë8Ô@Ì × Ø ¡ ð äÙÈærÞÈÛ Ø ¡`ÝyÙ7¡•ó)änæ7¡%Ý ð äQÙ Ø Üæ_¡`  Ø ø)ù8ú,û ï~ü ÚäšÞE¡ Ø ¡%Ù Ø yÚBèQ¡ ÚBä ð Ûgâ7yÚÛQݝ iÛڝyäÙG¤ ÷ ø)ù8ú,û ï/ü1ý ø)ù8ú,û  "!#$%&(')Œù ï/üNü *£á7 ؕð ÛgÙvÞE¡¨Ûgâ7âÈÝy¡iæ'ÚäbönÙ7äí€Ùví1äQ æ Ø Û Ø íC¡`ÝÝ8à ÞEänä Ø ÚBÙ7žHÚBáÈ¡ â7 äQÞÈÛQÞ7Ýڇå)ä ì â7 Bäâ@¡`  Ù7äŸ7Ù Ø yÙ ð Ûgâ7yõ ÚÛgݝ %¡`æ ð äٌÚB¡,ònÚ Ø ÛgÙ@æ‘Ýyäí1¡% Ù7žKÚ/yÙÚá7¡ äQÚBá7¡`  ð äÙ_õ Ú¡,ònÚ Ø ,+ ä B¡ Ø äâ7á7 Ø Ú ð ÛÚ¡`æ"ó)ä_æ_¡%Ý Ø ¡,ò_â7ÝäQyÚBÙ7ž‘Úá7¡ ð Ûgâ7yÚÛQݝ iÛڝyäÙ ì ¡iÛÚBŸÈ B¡ Ødð ÛgÙbÞE¡ ì äQŸ7ÙÈæ'Ù .ánŸ7  ð áGà /1022 WÛQÙÈæ 3 ¡4+ Ûg  ð öQ¡`ÙGà /(050 5 ÃÄ6 798;:rÑ <>=ªÆ7ÇiÉnÖ Ó ÏË8ÔÈÏ@?~Ç(`ËA'Æ`Ë8Ô@Ì *£á7¡€ó)¡%ÚBá7ä_ædÚBá@ÛÚ Ø ¡%¡`ó Ø ÚBä|í1äQ ö|Þ@¡ Ø Ú ì ä  Ÿ7Ù7önÙ7äí€Ù í1äQ æ Ø Ù ÙB@¡ ð Ú¡`æ ÝÜÛgÙ7žŸÈÛgž¡ Ø ólÛgöQ¡ Ø Ÿ Ø ¡vä ì Úá7¡ Ø ŸCHòÛQÙÈÛgÝå Ø  Ø ä ì íCä æ Ø ënŸCHònõ{ÞÈÛ Ø ¡iæáÈÛgÙ@æ_ݝyÙȞ ä ì Ÿ7Ù7önÙ7äí€Ùbí1äQ æ Ø áÈÛ Ø ÞE¡%¡`Ùbâ7 äQâEä Ø ¡`æÙbèÛg äQŸ Ø í1äQ ö Ø  "¡` Øð á7¡iæ_¡%Ý¡,ÚôÛQÝD yà /(005EF ë_ÛQódŸ7¡`Ý ØBØ äQÙGà /(005EF *£á7¡iæ_¡gà /(0052FHG ÛI‡KJàÈ¢5 F   ÛgÙŒÚ Ø àÈ¢55 L ËÑɌÖM<ÐCÉnÌ;NO(PM<Q7R8M:rÑ Ó Ï=Ë8ÔÈÏÇ œ7ä  ÛgÝÝ»ÝÜÛgÙ7žŸÈÛgž¡ Ø í€yÚBárÛÚßÝ¡iÛ Ø Ú ó)Ù7ólÛQÝ[ÙBÈ¡ ð õ ڝyè¡'â7 äQâE¡% BÚB¡ Ø à í€áȝ ð áwyÙ ð 
ÝŸÈæ_¡ ØTS Ù7žÝ Ø á ÛgÝ Ø ä7à yÚ  Ø âEä ØØ Þ7Ý¡HÚä Ÿ Ø ¡)ÚBá7¡îÙ ì äQ ólÛڝyäÙ¨äÞ_ÚÛQyÙÈ¡`æ ì  Bäó Úá7¡VU Ø ŸCHòW  Þnå•í€á7 ð á‘í1¡|ó)¡`ÛgÙîÚBáÈ¡ßíCä æ_õ8þÈÙ@ÛgÝ Ø ¡,õ ÿ Ÿ7¡`Ù ð ¡dä ì Ý¡,ÚBÚB¡%  Ø  ¡%žŒÛg æ_Ý¡ ØØ ä ì í€á7¡%ÚBá7¡` /yÚ ÞE¡%ÝäÙ7ž Ø ÚäîÚá7¡HÚB ÛQæ7ÚäQÙÈÛQÝ Ø ŸCHò¨äQ  ¡%Ù@æ_Ù7ž ð ÛÚ¡%žäQ y¡ Ø  ì äQ  ۚó)äQ ¡)þÈÙ7¡%õ{ž ÛQÙ7¡`æ"¡ Ø ÚBólÛgÚBäQÙä ì Úá7¡ÚÛQžšæ_ Ø ÚB õ Þ7Ÿ7ÚBäQÙ â7 äQÞ@ÛgÞ7ÝyÚB¡ Ø ì äQ ôÚá7¡ ŸÈÙ7önÙ7äí€Ù íCä æ Ø X*£á7¡ Ø ó)â7Ý¡ Ø ÚCó)ä_æ_¡%Ý ð äQÙ Ø Üæ_¡%  Ø Û þÈòn¡iænõ#Ýy¡`Ù7žgÚá Ø ŸCHò¤ ÷ ø)ùú,û ï/üCý ø)ùú,ûY;Z[')  $'Q\[]!"^   _Q`aZQù ï/üNü *[ÛQÞ7Ý¡ / Ø á7äí Ø Úá7¡l Û=í£õ ð äQŸ7ÙŒÚ æ_ Ø ÚB yÞȟ_ÚBäQÙ Ø äÞ_õ Ø ¡% èQ¡i樝ÙÛgÙ S Ù7žQݝ Ø á¨ólÛgÙnŸÈÛQÝÝyå‘ÛgÙÈÙ7ägÚÛÚB¡iæ Ú¡,ònÚßä ì / ó)ÝyݝäQٚíCä æ Ø  ì  Bäó ÚBá7¡b ëHc ð äQ â7Ÿ Ø  ì äQ  Ø ¡%èŒõ ¡` ÛQÝ1í1äQ æ Ø áÈÛ=ènÙ7ž ÚBáÈ¡ Ø ŸCHòbõQ!.'edf*£á7¡‘Þnåõ{ÚBäöQ¡`Ù ÛQÙÈælÞnåŒõ8ڇånâE¡/æ_ Ø ÚB Þ7Ÿ_ÚBäÙ Ø»Ø áÈäí€Ù•ÛÚ1ÚBá7¡~ÞEägÚNÚäQó ä ì Úá7¡|ÚÛQÞ7Ý¡ áÈÛ=èQ¡ ÞE¡%¡%Ù ð äQó)â7Ÿ7ÚB¡`æ ì  äQó ÛQÝÝÚBá7¡ßí1äQ æ Ø áÈÛ=ènÙ7ždÚBá7¡ Ø ŸCHòlõQ!.'|ÙîÚBá7¡ Ú ÛQyÙȝyÙȞ æ7ÛÚÛ *[ÛQÞ7Ý¡ / ÝÝŸ Ø Ú ÛgÚB¡ Ø ÚBáÈ¡ ÛQæ_èÛgٌÚÛgžQ¡ ä ì Ÿ Ø yÙȞƒÛ Ø ŸCHòñó)änæ7¡%ÝKäèQ¡% ¨ÚBá7¡ Ÿ7Ù7èQ¡`  Ø ÛgÝ Ýy¡%ò_ ð ÛgÝßâÈ BäQ ¨yÙ ¤¦·N§v¤"¤¦·I¢¬^­Iµ¦¢-¥Äƞ¢†¨®¢ȏ°´¡-§©¨1ª¥Ä°®¬ž¥Ô³¼¬ž¥2§ٍȏ¢›µT¿HgkÈ êihNQ!{gïW¬©³´¤Ä¢-£Ã°˜Õ ¢-¥9µWµ¦¿­IµT¤Ä§ž£z¤¦°´§©¨®¨®ºÅ³¼¥Ä¬ž¯ ¤¦·¢›ªI§ž£² ƞ¬¡§©­¿¨´§ž¥¦ºÖ¿I£°®Æž¢-¥9µÄ§©¨”¤9§©¸WªI¥¦¬^­I§©­°®¨®°Á¤¦°®¢µÜêÍ'§ž­¨®¢óžïÌ j9k j9kHl m m m m l nok pp qsr>t qsuetvwx y z { { { { qs|>}(qs~s}€ivwx z  ‚ { { { ƒ ~st…„†.vwx { { { { { ‡ ~.ˆ‰w%Š.„†.vwx  { { { { { ‹ rŒŠ.„†.vwx y { { { { { Ž vwx z ‡  { { {  }erqsqsue€ivwx { { { { { ‘  }(r ƒ ~s’ uvwx { { { { { ‚ tvwx { { {   ‘ “ | ƒ ~s€ivwx   { { { y>” • €  – vwx { { { z { — €ivwx   z>‡z { { { ˜"ue€"€|>™1vwx { {  { {  š uQ›œˆœDvwx • €  |€MžŸŒ œ|¡>~s}1¢ £ ‘ £ {‚ £ y” £ iy £ { £ >‘ š ue›¤ˆœ#vwx • €  |€;ž…Œ œ  • ~i¢ £ ‚{ £ {‡ £ ‘ £ {” £ {{ £ >— „%¥w…Š.„†.vwx ¦ ¦ ¦ ¦ ¦ ¦ 
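The by-token vs. by-type distinction just mentioned can be made concrete with a small sketch. This is an illustration only (the toy corpus is invented, and the by-type variant here assigns each word its majority tag, which is one reasonable reading of type-weighted estimation):

```python
from collections import Counter, defaultdict

def suffix_priors(tagged_tokens, k=3):
    """Estimate P(t | suffix_k(w)) two ways: by token (each corpus
    occurrence counts once) and by type (each vocabulary word counts
    once, via its majority tag)."""
    by_token = defaultdict(Counter)
    word_tags = defaultdict(Counter)
    for word, tag in tagged_tokens:
        by_token[word[-k:]][tag] += 1
        word_tags[word][tag] += 1
    by_type = defaultdict(Counter)
    for word, tags in word_tags.items():
        majority = tags.most_common(1)[0][0]
        by_type[word[-k:]][majority] += 1
    def normalize(table):
        return {s: {t: c / sum(cnt.values()) for t, c in cnt.items()}
                for s, cnt in table.items()}
    return normalize(by_token), normalize(by_type)

# Invented corpus: one frequent verb form, several rarer nouns.
corpus = [("walking", "VBG")] * 4 + \
         [("ceiling", "NN"), ("sterling", "NN"), ("shilling", "NN")]
tok, typ = suffix_priors(corpus, k=3)
```

On this data the token-weighted prior for -ing is dominated by VBG (the frequent form), while the type-weighted prior is dominated by NN (the more numerous word types), illustrating how the two weightings can disagree.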
Table 1 illustrates the advantage of using a suffix model over the universal lexical prior, in that the observed lexical prior for a fixed suffix often differs substantially from the pan-vocabulary universal tag probabilities (Table 2).

[Table 1: Raw tag-count distributions for words ending in -ing in a 1-million-word annotated English corpus, with by-token and by-type suffix priors; longest-suffix matches emphasized]

[Table 2: Universal priors for the POS tags, computed by token and by type over a 1-million-word annotated English corpus]

On the other hand, Table 1 also illustrates two problems with suffix-based estimation for part-of-speech priors. While a previously unseen word may be primarily an adjective, the dominant part-of-speech for its fixed-length suffix may be NN (using token-weighted estimation) or VB (using type-weighted estimation). Modeling longer suffixes can just make things worse in such cases, when the words sharing the longer suffix in the tagged corpus are exclusively VB or VBP, and the forms of the target word itself that appear in the training text are also exclusively tagged as VB. Suffixes clearly do not capture all relevant information in predicting tag probabilities for unknown words.

Linear Interpolation of Fixed-Length Suffix Models

As a third baseline we considered an interpolated suffix model, to demonstrate the relative effectiveness of these approaches when restricted to estimation by smoothed fixed-length suffix models. We used the interpolation of 3 fixed-length suffix priors:

  P̂(t|w) = Σ_{k=1..3} λ_k · P(t | suff_k(w))

where suff_k(w) denotes the suffix of length k of word w.

Variable-Length Suffix Models

Given that the length of informative suffix context varies considerably across suffixes, our fourth and final baseline model uses a smoothed trie-based architecture (similar to the one presented in (Cucerzan and Yarowsky, 1999) for named entity classification) to estimate

  P̂(t|w) = Σ_k ω(k, suff_k(w)) · Σ_{v : suff_k(v) = suff_k(w)} P(t|v)    (1)

In some cases, this variable-length suffix method may have the opposite problem of the fixed-length method, over-training by giving too much weight to the properties of the morphological forms most similar to the given word w encountered in the training text (in the example above, the estimation becomes worse as we consider longer and longer suffixes). Still, our experiments show that variable-length outperforms the fixed-length interpolation models.

However, many words with similar internal suffixes are misleading indicators of the lexical priors for unseen words. Our goal, therefore, is to borrow lexical probability estimates from a more predictive set of previously seen exemplars. The following sections propose such methods.

3 Similarity Measures

Recall that the central task of lexical prior estimation is determining how much weight each previously-seen exemplar's distribution should contribute to an unknown word's distribution. Rewriting formula (1) in the following equivalent form:

  P̂(t|w) = Σ_v P(t|v) · ω'(lcs(v,w))    (2)

where lcs(·,·) represents the longest common suffix of two words and the weights ω' are derived from those in (1), observe that this is merely a special case of a more general representation:

  P̂(t|w) = (1/Z) Σ_v P(t|v) · sim(w,v)    (3)

where sim(·,·) can be any weighting of potential exemplars for a target word (Z is a normalization factor). But what should this similarity measure take into account?

[Table 3: Suffix distributions at several breakpoints for three French example words, as observed in a large untagged French corpus]

3.1 Suffix-Based Paradigmatic Distance

The primary intuition behind the following paradigmatic distance measure is that words which have similar probabilistic distributions of added suffixes will also tend to have similar part-of-speech tag distributions. Consider two French words that can both be singular nouns and 1P/3P-singular-present verbs, with the noun usage significantly more common for both words. While their internal suffixes differ at the 3rd position, both words exhibit a very similar distribution of observed added suffixes (shown in Table 3). Both are dominated by the noun-consistent signatures, with a much smaller distribution over the verb-consistent signatures. The divergence is further illustrated by considering added-suffix distributions starting at 1-character and 2-character truncated forms of the target words. In contrast, a third word — almost exclusively a 1P/3P-singular-present verb — while exhibiting superficial internal suffix similarity to the first word, exhibits a very different added-suffix distribution. This distinction is important because the first word was never observed with a part-of-speech tag in the selected 60k annotated text, and its tag distribution needs to be estimated as an unknown word.
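The exemplar-weighting scheme of formula (3) can be sketched directly. This is a toy illustration, not the paper's implementation: the stand-in similarity function (longest-common-suffix length) and the example distributions are invented.

```python
def estimate_prior(word, exemplars, sim):
    """Formula (3): P̂(t|w) = (1/Z) * sum_v sim(w,v) * P(t|v),
    where `exemplars` maps each seen word v to its distribution P(t|v)
    and Z normalizes the similarity weights."""
    weighted = {}
    z = 0.0
    for v, dist in exemplars.items():
        s = sim(word, v)
        z += s
        for tag, p in dist.items():
            weighted[tag] = weighted.get(tag, 0.0) + s * p
    return {tag: p / z for tag, p in weighted.items()}

def lcs_sim(w, v):
    """Toy stand-in for sim(.,.): length of the longest common suffix."""
    n = 0
    while n < min(len(w), len(v)) and w[-1 - n] == v[-1 - n]:
        n += 1
    return float(n)

# Invented exemplar distributions:
exemplars = {"floating": {"VBG": 0.7, "JJ": 0.3},
             "mornings": {"NNS": 1.0}}
prior = estimate_prior("boating", exemplars, lcs_sim)
```

Because "boating" shares a long suffix only with "floating", the estimate is borrowed almost entirely from that exemplar; a paradigmatic sim(·,·), as developed in Section 3.1, would replace the toy suffix-overlap function.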
¹+½·°´¡9·4°´µÔµT¢¢-£4°®£Ï¤Ä·¢µ¦¯2§©¨®¨ ¤Ä§ž¸ž¸^¢ ¡-¬ž¥Äª¿Iµ¬^£¨®ºR§žµÚ§›Æž¢¥¦­G¹8§žµ½1¢-¨®¨8§žµ¬ž¤¦·¢¥ ¬ž¥¦¤¦·I¬ž¸ž¥9§©ªI·°´¡-§©¨®¨®ºµ¦°®¯"°®¨®§ž¥q½1¬ž¥9µµ¦¿I¡9·Ï§žµ g9E%9@!   ꂵ¦¢-¢£Ï§žµœÆž¢¥¦­dï9¹£ g9@!  Ú꼪I¥¦¢ªN¬zµT°Á¤¦°®¬^£Nï9¹‰§ž£I  ,9 9 h !  ꂢ-£I¡-¬ž¿£z¤Ä¢-¥Ä¢Âܧžµ”­d¬©¤Ä·"ƞ¢¥¦­"§©£NÂÜ£¬ž¿£dï9¹Ÿºz°®¢¨®Â°Á£I¸ ¤¦·I¢k¯"°´µ¦¨®¢§žÂ°Á£I¸Ü¡-¬ž£I¡-¨®¿IµT°®¬^£¤Ä·I§v¤•%9@!  k°´µqª¥Ä¢¬ž¯Ü² °®£I§©£z¤Ä¨Áº\§Ïƞ¢¥¦­GÌúý?£ ¡-¬ž£z¤¦¥9§žµT¤-¹~2,!  6È`!86  ¹½·°´¡9·É°´µ ªI§ž¥Ä§^°®¸ž¯2§v¤Ä°´¡-§©¨®¨Áº¤¦·¢¯"¬zµ=¤µT°®¯"°®¨´§©¥"½1¬ž¥9ÂRª¥Ä¢µ¦¢-£z¤ °®£Å¤¦·I¢$µ¦¯2§©¨®¨Ø¤9§©¸^¸ž¢ ¡¬ž¥Äª¿Iµ-¹q¬¡-¡-¿¥9µ2¤Ä·¢-¥Ä¢¢-ȍ¡¨®¿² µ¦°ÁÆ^¢-¨®ºÚ§žµ8§£¬^¿£G¹X§ž£IÂ$°´µq¤Ä·Ÿ¿Iµ§¯k¿I¡9·W­d¢¤¦¤¦¢-¥¤Ä§©¸ž² °´µT¤¦¥Ä°Á­I¿¤¦°®¬ž£«¢ȏ¢-¯"ª¨´§ž¥W³¼¬^¥ ,9E!  ¹2½·°´¡9· °®µÏ§ž¨´µT¬ ¬vƞ¢¥¦½·I¢-¨®¯"°®£¸ž¨®º§Ü£¬^¿£GÌ Ý¬^¥¦¯2§ž¨Á¨®ºž¹©¨Á¢-¤H«­N¢D§Æ^¬¡-§©­I¿¨´§©¥Äº¢ȟ¤¦¥9§ž¡¤¦¢‘Âܳ¼¥¦¬^¯ §ž£Ö¿£I§ž££¬©¤9§v¤Ä¢ÂW¡-¬ž¥Äª¿IµJIˬvƞ¢¥á§2¨´§ž£¸ž¿I§ž¸ž¢Ks§©£I L 6:Må¤Ä·¢œµT¢-¤1¬ž³‰ªN¬zµ¦µ¦°®­¨®¢qµ¦¿gÜȟ¢‘µ+³¼¬^¥1¤¦·I§©¤1¨´§©£I¸ž¿I§ž¸ž¢©¹ ¢-Èz¤Ä¥Ä§^¡¤Ä¢›§©¨´µT¬W³¼¥¦¬^¯Ñ¡¬^¥¦ªI¿IµNIÓ­Ÿº†¡-¬ž£Iµ¦°´Â¢-¥Ä°®£¸W§©¨®¨ ¤Ä·¢ØµT¿Hgkȏ¢‘µ¤æ1³¼¬ž¥”½·°´¡9·Ô¤¦·¢¥¦¢¢-ȟ°´µT¤Äµ6§œ¡¢¥T¤9§©°®££Ÿ¿¯Ü² ­d¢-¥2ꏢ-ªd¢-£I¢-£z¤¬ž£¤¦·¢Ü¨´§©£I¸ž¿I§ž¸ž¢§ž£I¡-¬ž¥Äª¿Iµœ¡¬^£² µ¦°´Â¢-¥Ä¢ÂI﬩³œÂ°´µ=¤Ä°Á£N¡¤Ô½1¬ž¥9µd°Á£OHÒµ¦¿I¡9·†¤¦·I§©¤k¤Ä·¢ ¡-¬ž£I¡§v¤¦¢£I§v¤Ä°®¬ž£Iµ»-Âæá§ž¥¦¢á§©¨´µT¬8°®£"¤Ä·¢qÆ^¬¡-§©­I¿¨´§©¥Äº©Ì'ÝI¬ž¥ ¤Ä·¢"µ=¤Ä¿I°®¢Â¨´§©£¸^¿I§©¸^¢µØ½1¢k¨Á°®¯"°Á¤Ø¤Ä·¢Ü¨Á¢£¸©¤Ä·W¬ž³1µ¦¿³´² ٍȏ¢‘µö¤Ä¬œîبÁ¢-¤T¤Ä¢-¥9µ'§ž£IÂ8½1¢t¢-ÙI£¢t¤¦·¢1¢ȟ¤¦¢£I¢‘µ¦¢¤Äµ'¬©³ Æv§©¨®°´Â#µ¦¿gÜȏ¢µ§^µ L 6:MQP ÚSR?oæC+`+ ?e+UTOVó[+ ?oæŒ+ WDXóOæY L 6:Mó[Z]\ ß ó^ ó,\_ö°®µT¤¦°®£I¡¤XµT¤¦¥Ä°®£¸^µ'µT¿I¡9·¤¦·I§©¤!\ Ý ?Oæ`YaH ³¼¬^¥¶êbYc IJdz¹'½·¢-¥Ä¢HI𴵜§¨´§©£¸^¿I§©¸^¢Ô§©£N¡-¬ž¥Äª¿Iµ ¢ªN¢£I¢£^¤X¤¦·I¥¦¢‘µT·¬^¨´Â Ìz+§©¥Ä°´§v¤¦°®¬^£Iµ ¬©³¤Ä·°´µö¢ȟ¤Ä¢-£Iµ¦°®¬ž£Iµ ¡§©£œ­N¢1¡¬^£IµT°´Â¢¥¦¢‘³¼¬^¥ö¨´§©£¸^¿I§©¸^¢µd½°Á¤¦·8µTªd¢¡-°´§©¨©°®£øI¢‘¡² ¤Ä°Á¬^£I§©¨ ª¥Ä¬žªd¢-¥¦¤¦°®¢‘µDêµT¿N¡9·§^µ 6_Fdg9@!t°®£ î ¢¥¦¯2§ž£Nï9Ì 
To build the suffix families F(w,k), we consider all the vocabulary entries that can be obtained from the word w by stripping its last k letters and adding a valid suffix from SUF_L:

  F(w,k) = { σ ∈ SUF_L : w'σ ∈ V }, where w' is w with its last k letters removed.

The word break in front of the last k letters will be called the k-th break point. The distribution functions Φ_{w,k} : F(w,k) → [0,1] are obtained by counting the occurrences in the corpus of the vocabulary entries w'σ and normalizing the counts.

The motivation for considering suffixation distributions from multiple word positions is that the suffix families at the 0th position alone can be sparse and misleading, particularly for inflected or rarely encountered words. For example, the similar part-of-speech behavior of two related English word forms (Table 4) is not sufficiently evident from the distributions at the 0th position alone, due to the low frequency of one of the forms. Also, adjectives and nouns with the same ending may have similar suffix families at the 0th position, but the suffix families at the 1st position capture different "nominal" and "adjectival" properties, making the distinction between the two classes clean and visible, as observed in the considered untagged corpus. It is thus more robust to also include suffix distributions over several truncated forms as well. It was determined experimentally that the distributions at positions greater than 3, and the ones obtained for very short words, are not useful. This does not represent a major problem, because unknown words tend to have long forms in most languages.

[Table 4: Suffix distributions at the 0th and 1st positions for two related English word forms, as observed in a large untagged English corpus]

Various distance measures (cosine similarity, Euclidean distance, L1 norm) and interpolation methods were used in our experiments to determine the most suitable formula for the paradigmatic distance. The best scores were obtained for the L1 norm, using a weighted product combination

  PDIST(w,v) = Π_{k=0..3} PDIST_k(w,v)

and a Jaccard-type (Salton and McGill, 1983) alteration to penalize the cases in which major differences in the underlying suffix families are found:

  PDIST_k(w,v) = [ Σ_{σ ∈ F(w,k) ∩ F(v,k)} |Φ_{w,k}(σ) − Φ_{v,k}(σ)| + Δ(w,v,k) ] / [ Σ_{σ ∈ F(w,k) ∪ F(v,k)} |Φ_{w,k}(σ) + Φ_{v,k}(σ)| ]

where Δ(w,v,k) accounts for the suffixes that appear in only one of the two families.

Based on the paradigmatic distance computed in this way, it is possible to filter out the words with similar endings that occur with different suffix families and distributions. Furthermore, this filter has the advantage of being trained on completely untagged corpora, a potentially unlimited resource. Should a word not appear even in the large raw text corpus, some smoothing technique based only on suffix similarity would still be needed (such as fixed- or variable-length suffix interpolation).

[Table 5: Estimated priors as noun vs. verb for the example words]
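A runnable sketch of suffix families, their normalized distributions Φ, and an L1-based per-position distance in the spirit of PDIST_k. This is a simplification for illustration (single position, no Jaccard penalty term Δ, no product over positions), and the word counts are invented:

```python
from collections import Counter

def suffix_family(word, k, suffixes, counts):
    """F(w,k) with occurrence counts: valid suffixes that attach to
    w truncated by its last k letters, weighted by corpus frequency."""
    stem = word[:-k] if k else word
    fam = Counter()
    for s in suffixes:
        c = counts.get(stem + s, 0)
        if c:
            fam[s] = c
    return fam

def phi(fam):
    """Normalize raw counts into the distribution Phi_{w,k}."""
    total = sum(fam.values())
    return {s: c / total for s, c in fam.items()}

def pdist_k(w, v, k, suffixes, counts):
    """L1 distance between the suffix distributions of w and v at
    position k; suffixes missing from one family contribute fully."""
    pw = phi(suffix_family(w, k, suffixes, counts))
    pv = phi(suffix_family(v, k, suffixes, counts))
    return sum(abs(pw.get(s, 0.0) - pv.get(s, 0.0))
               for s in set(pw) | set(pv))

# Invented counts for inflected forms of two French verbs/nouns:
counts = {"portes": 4, "portent": 1, "portera": 1,
          "montres": 4, "montrent": 2}
suffixes = {"s", "nt", "ra"}
d = pdist_k("porte", "montre", 0, suffixes, counts)
```

Words whose added-suffix distributions mostly agree yield a small distance, so they are kept as tag-distribution exemplars for each other; the full measure would combine several positions multiplicatively and add the Jaccard-type penalty.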
¨®ºŸ°Á£I¸Dµ¦¿gÜÈÔ³‚§ž¯"°®¨Á°®¢‘µâê¼£I¬©¤â¬^£¨®º°®£Ô¤Ä·¢q°´µT¤¦¥Ä°®­¿¤¦°®¬^£IµÄï §©¥Ä¢D³¼¬ž¿£N ù …uVsæi*3(-¬ó>ûOó,V‡. Ú ñ )Ž %  ! P # %  þ P # + èª(J-¬ó)VAæ=.¶èª(û ó)VBA æ=.%+  ‘_(J-¬óû ó)V‡. ñ )Ž %  ! P #’“ %  þ P # + èª(-¬ó,VAæ=.”ëèª("ûOó,VA æi.`+ ë1§^µT¢‘ÂW¬^£W¤Ä·¢"ªI§ž¥Ä§^°®¸ž¯2§v¤Ä°´¡8°´µ=¤9§©£I¡-¢k¡-¬ž¯"ª¿¤Ä¢ °®£¤¦·°´µ8½§gº©¹6°Á¤°´µªd¬^µÄµT°®­¨®¢Ô¤¦¬ÖÙN¨˜¤Ä¢-¥¬^¿¤¤Ä·¢#½1¬ž¥9µ ½°Á¤¦·$µT°®¯"°®¨´§©¥¢-£I°Á£I¸^µØ­I¿¤8¬Ÿ¡¡¿¥Ä¥Ä°Á£I¸2½°Á¤¦·W°ÁÕ ¢-¥Ä¢-£z¤ µ¦¿gÜÈú³‚§©¯"°®¨®°Á¢‘µÖ§©£N«°´µT¤¦¥Ä°Á­I¿¤¦°®¬ž£Nµ̋ݍ¿¥¦¤¦·I¢-¥Ä¯"¬ž¥Ä¢©¹ ¤¦·I°®µÙI¨Á¤¦¢¥·I§žµ¤¦·¢§žÂÆv§ž£^¤9§©¸^¢"¬©³Ø­d¢-°®£¸¤¦¥9§©°®£¢‘¬ž£ ¡¬^¯"ª¨®¢¤Ä¢-¨®ºq¿£z¤9§©¸ž¸^¢Â¡-¬ž¥ÄªN¬^¥Ä§¹g§áªd¬©¤¦¢£z¤¦°´§©¨®¨®ºq¿£I¨Á°®¯Ü² °Á¤¦¢‘Â#¥Ä¢µ¦¬ž¿¥9¡¢žÌ 䟷¬ž¿I¨®Âܧ8½1¬ž¥9ÂÜ£¬ž¤t§©ªªd¢§ž¥+¢-Æ^¢-£"°®£Ü¤¦·I¢q¨´§©¥Ä¸ž¢1¥Ä§g½ ¤¦¢-ȟ¤6¡-¬ž¥Äª¿Iµ-¹vµ¦¬ž¯"¢1µT¯"¬Ÿ¬ž¤¦·°®£¸¤Ä¢¡9·£°´×z¿¢+­N§žµ¦¢Â¬^£¨®º ¬ž£µ¦¿gÜȵ¦°Á¯"°®¨´§©¥Ä°Á¤=º8½t¬^¿¨´Â#µT¤¦°®¨®¨ ­N¢8£¢¢¢‘ÂêµT¿N¡9·§^µ ٍȏ¢Â¬^¥1Æv§ž¥¦°´§©­I¨Á¢á¨®¢-£¸ž¤¦·µ¦¿gÜȰ®£z¤¦¢¥¦ªd¬ž¨´§©¤¦°®¬ž£NïÌ =SÜR adpEZ^`hÓG^ Ö XHÐ Õ Y?bYDÐX_VY^)•  qµ”§D¡¬^¯"ª¨®¢-¯"¢£^¤‰¤¦¬D¤¦·¢µ¦¿gÜȟ²{­I§^µT¢‘ªI§ž¥Ä§^°®¸ž¯2§©¤¦°´¡ °´µT¤Ä§©£N¡¢kª¥¦¬^ªN¬zµT¢‘ÂÖ°®£¤¦·°´µœªI§žªN¢¥-¹'§½1¬ž¥9Ÿ²?¡¬ž£z¤Ä¢ȟ¤T² ­I§^µT¢‘ÂŵT°®¯"°®¨´§©¥Ä°˜¤=º$¯"¢§^µT¿I¥¦¢·I§^µ2­d¢-¢-£xµT·¬v½£É¤Ä¬†­d¢ ¿Iµ¦¢³¼¿¨q³¼¬ž¥"¤9§©¸^¸ž°®£¸$¿£ Ê £¬v½£É½t¬^¥ÄÂIµÌxë1¥Ä°®¨Á¨8êTìížízîžï ¿¤Ä°Á¨®°-¢‘ÂD½t¬^¥Ä¡-¬ž£z¤¦¢-ȟ¤'£¢°Á¸^·Ÿ­N¬^¥¦·¬Ÿ¬Âµö¤¦¬q¯"¬Â¢¨z§ž£I ª¥Ä¢°´¡¤D¤9§©¸zµœ³¼¬ž¥¿I£ Ê £¬v½£½t¬^¥Äµ-Ì䏡9·]–¤-¢WêTìížíP^ï ¢ȏª¨®°´¡°Á¤Ä¨Áº³¼¬ž¥Ä¯Ô¿I¨®§©¤¦¢‘¤ķ¢8¡¬^£I¡¢ª¤¬©³‰ªI§ž¥Ä§^°®¸ž¯2§©¤¦°´¡ — ˜5r>™ l €  |>€   ˜"€  Œeue  |>} ˜  “ ž  }€  qsr>~,™ ô ¢ j9k j9kHl m m m m l nOk pp l r€"r ƒe ™ “ £   ˜KDr>}(qs~ š |>}D~.ˆ uert š  “  t r€     }(r>qsqsu(€ivwx { { { { { ‘ {Q£ {>‚{ { £ {Qi‘  }er ƒ ~s’ u5vwx { { { { { ‚ {Q£ {>‚{ { £ {‘> ƒ ~st%„†#vwx { { { { { ‡ {Q£ iz>‚ { £ i‚  ˜"ue€"€|>™1vwx { {  { {   £ >z— {$›)› “ | ƒ ~s€ivwx   { { { y>”  £ —‚” { £ {Qiz • €  – vwx { { { z { —  £ ”>z‡ { £ 
{—>z qsrt qsu(tvwx y z { { { { ” £ >— { £ {>” ~.ˆ‰w…Š#„†#vwx  { { { { { y£ ‡> {Q£ {>{{‘ qs|}eqs~s}€vwx z  ‚ { { { ‘£ ‘”‚ { £ {Qi‡ ‹ rŒŠ.„†.vwx y { { { { { y>‚ £ {”>‡ {Q£ {>{{‚ €ivwx   z>‡z { { { s{‡£ —>{” { £ {‡>z tvwx { { {   ‘ >i‚ £ ‚{ { £ ”‚ Ž vwx z ‡  { { { i£ z>{y { £ {y>”  }€  qsrD~ ¦ ¦ ¦ ¦ ¦ ¦ œ}ž{Ÿ¡ $¢$£¤¥G¦ §¨ž{Ÿª© « ¬¤­2®¯‰° l r>€r ƒ( ™ “ r>  q ƒe ˜K€  Œ(uQ  |}8› { { { { {  œyœ±y²{© ¢¤­@ž{¬Q© « ¬$¤­U³{¯¨®° ­ ´žµ¶¬f¦·ž{§ ž{Ÿ¡ ²{§« ­ž{¬ ÍX§©­¨®¢ÏîŸù¹¸ª–Œ†b®ª`‚N’QˆH‰y‚‡(©º¸„p-5­† µ „,‚‡†‘`‚N’Q†‚‡†3’»-½¼ ˆ Š5¹±ª°«n„,‚N„i’QŽ…¯(©ß„%…‡Žß’ސˆ#…N„%‹Œ†¬® Ž …‡–‘‚‡†ˆs«_†… …‡H…‡–Œ† …N„,‚s° ¯i†…O®ª%‚N’4½¿³À>½KÁi²³"´0¡¾[`…‡–€…‡–Œ†R«n„,‚N„i’QŽ…¯(©ß„%…‡Žª„%‹n’£`‹…‡†ª° …‡Šn„e­©|†„`ˆ Š‚‡†ˆo®ª†‚‡†e1©¬«Š…‡†3’1‰‚‡1©v„%‹7‚8ƒQ°©|Ž…­%­Ži‹°®e`‚N’ ŠŒ‹Œ„`‹Œ‹`…N„,…‡†3’d%‚i«ŒŠˆ$ µ¦°®¯"°Á¨´§ž¥¦°Á¤=ºØ¬vÆ^¢-¥6£¢§ž¥¦­Ÿº8½1¬ž¥9ÂÔ¡-¬ž£z¤¦¢-ȟ¤Äµ-¹ž¿Iµ¦°Á£I¸q¤Ä·°´µ6°Á£ §ž£änzJ¿ú³¼¥9§©¯"¢½t¬^¥ Ê ³¼¬ž¥ªI§ž¥T¤¦²{¬ž³´²¶µ¦ªd¢-¢¡9·#¤Ä§ž¸ž¸ž°®£¸IÌ ™$¢Ø§ž¨´µT¬á¿¤¦°®¨®°-¢Â8¤¦·°´µX¥¦¢¨´§v¤¦°®Æ^¢-¨®ºq¬^¥T¤Ä·¬ž¸^¬ž£I§ž¨^°®£³¼¬^¥T² ¯2§©¤¦°®¬ž£Ïµ¦¬ž¿¥9¡¢2§^µ§W¡-¬ž¯"ª¨®¢¯"¢-£z¤œ¤¦¬Ú¤Ä·¢ªI¥¦¬^ªN¬zµT¢‘ µ¦¿gÜȟ²{­I§^µT¢‘Â#ªN§©¥9§žÂ°®¸ž¯2§©¤¦°´¡°´µ=¤9§©£I¡-¢©Ìª™¢¡9·¬^µ¦¢á¿I£² °®¸ž¥9§©¯Ñƞ¢¡¤¦¬^¥ÄµÔ¤¦¬†¯"¬ŸÂ¢-¨¨®¢³´¤§ž£I›¥¦°®¸^·^¤"£¢°®¸ž·Ÿ­N¬^¥T² ·¬Ÿ¬Âµ-¹t§©£NÂ4¿Iµ¦¢Â\¡¬^µ¦°®£¢#µ¦°®¯"°Á¨´§ž¥¦°Á¤=º³¼¬ž¥k°Á¤ÄµÔ¥¦¬^­¿IµT¤T² £¢‘µ¦µ-̋ët¢‘¡-§ž¿IµT¢\¡¬^µ¦°®£¢†µ¦°®¯"°Á¨´§ž¥¦°Á¤=º›¬vƞ¢-¥W£z¿I¯"¢-¥Ä¬ž¿Iµ ¨´§©¥Ä¸ž¢-²{Æ^¬¡-§©­I¿¨´§©¥Äº¡-¬ž£z¤¦¢-ȟ¤Äµk¡-§©£\­N¢ƞ¢¥¦º¢-ȟªd¢-£NµT°®Æž¢ ¤Ä¬›¡-¬ž¯"ª¿¤Ä¢©¹Ø½1¢¬^£¨®º\°®£I¡-¬ž¥ÄªN¬^¥Ä§©¤¦¢‘†¤Ä·°´µ2¯"¢‘§žµ¦¿¥¦¢ ½·¢£Ü¤¦·¢áµT¿gÜȟ²¶­I§žµ¦¢ÂÔªI§©¥9§žÂ°Á¸^¯2§v¤Ä°®¡â°´µ=¤9§©£I¡-¢1¯"¢‘§v² µ¦¿¥Ä¢D½§žµ½°Á¤Ä·°®£§Ü¡¢¥T¤9§©°®£2¤Ä·¥¦¢‘µT·I¬ž¨´Â2¬ž³'Ɵ°´§©­°®¨®°Á¤=º©Ì =S= qîjiYZe\"^`ceh Õ Y?bYÐ8X_VYJ^$• Ø hnX7j Ö Vhnj ÍX§©­¨®¢ÉîR°®¨®¨®¿Iµ=¤Ä¥Ä§©¤¦¢‘µ¤¦·¢Å§žªª¨®°´¡-§v¤Ä°®¬ž£5¬©³#­d¬©¤Ä·«¤Ä·¢ µ¦¿gÜȏ¢Ÿ²¶­I§žµ¦¢ÂªI§ž¥Ä§^°®¸ž¯2§©¤¦°´¡q°´µT¤Ä§ž£I¡¢§ž£IÂÚ¡¬^£^¤Ä¢ȟ² 
¤Ä¿I§©¨âµ¦°®¯"°®¨®§ž¥¦°Á¤=º"¯"¢‘§žµ¦¿¥¦¢‘µá¤¦¬ª¥Ä¢°´¡¤Ä°®£¸2¤Ä·¢"¨®¢ȏ°´¡-§©¨ ª¥Ä°®¬ž¥"°´µT¤¦¥Ä°Á­I¿¤¦°®¬ž£†³¼¬ž¥"¤Ä·¢Öª¥Ä¢-Ɵ°®¬ž¿NµT¨®º¿£Iµ¦¢-¢£›Þ⣏² ¸^¨Á°´µ¦·2½1¬ž¥9 Ü9@!  g!#ÌâÍ·I¢8ªN¬ž¤¦¢-£z¤Ä°®§ž¨ ¢ȏ¢-¯"ª¨´§ž¥t¡-§ž£² °´Â§©¤¦¢‘µ¹µ¦¿I¡9·§^µµT·I¬v½£#°®£ÍX§©­I¨Á¢k쩹I§ž¥¦¢D¬ž¥9¢¥¦¢‘Â2­Ÿº ¤Ä·¢ªI§©¥9§žÂ°®¸^¯2§v¤¦°´¡X°´µ=¤9§©£I¡-¢â¯"¢‘§žµ¦¿¥¦¢ž¹gÙI¨Á¤¦¢-¥Ä¢Â­zºœ¤Ä·¢ ¯"¬^¥¦¢œ¢ȏªd¢-£Iµ¦°®Æž¢8§©£N¨Á¢‘µ¦µ¢-Õd¢‘¡¤Ä°ÁÆ^¢8¡¬ž£z¤Ä¢ȟ¤áµ¦°Á¯"°®¨´§©¥¦² °Á¤=º#µÄ¡¬^¥¦¢‘µ1§^µ1£¬©¤Ä¢Â§©­d¬vƞ¢žÌ ™$¢8°®£ŸÆž¢‘µ=¤Ä°Á¸z§v¤Ä¢Â2µ¦¢-Æ^¢-¥9§©¨ ½t¢°®¸ž·z¤¦°®£¸³¼¿£I¡¤Ä°®¬ž£Iµ³¼¬^¥ ¡¬^¯"ª¿¤Ä°Á£I¸4¤¦·I¢4¡¬ž£NµT¢£IµT¿NµÚ°´µ=¤Ä¥¦°®­¿¤Ä°®¬ž£Å³¼¥Ä¬ž¯‹¤Ä·°´µ µ¦ªI§ž¡-¢©Ì ™R·°®¨®¢†¿Iµ¦°®£¸Ïû=¿IµT¤¤¦·¢Åµ¦°®£¸ž¨®¢4¡¨®¬zµT¢‘µ=¤$¢ȟ² ¢-¯"ªI¨®§ž¥eý µ+°´µT¤¦¥Ä°®­¿¤¦°®¬^£kªd¢-¥¦³¼¬ž¥Ä¯"¢Â#µT¿¥Äª¥Ä°´µT°®£¸^¨Áº8½1¢-¨®¨{¹ ¤¦·I¢s­d¢µT¤ÅªN¢¥T³¼¬^¥¦¯2§ž£I¡¢ ½1§^µ†¬^­¤Ä§ž°Á£I¢»­ŸºÀ§Ã¿£°Á² ³¼¬ž¥Ä¯ç½1¢-°®¸ž·z¤Ä°Á£I¸4¬©³Ô¤¦·¢4°´µT¤¦¥Ä°®­¿¤¦°®¬^£Iµ"³¼¥Ä¬ž¯ç¢-ȏ¢-¯Ü² ª¨´§©¥9µÚ½°Á¤¦·I°Á£÷§ž£÷¢ȏªd¢-¥Ä°®¯"¢-£z¤Ä§ž¨Á¨®ºÅ¢-¤¦¢-¥Ä¯"°®£¢Âú°®µT² ¤Ä§ž£I¡¢É¤Ä·¥Ä¢µ¦·¬ž¨´Â Ì 0 £¸ž¬^°Á£I¸x½1¬ž¥ Ê °´µ4¡¬^£Iµ¦°®Â¢-¥Ä°Á£I¸ ½1¬ž¥9ÂܨÁ¢£¸©¤Ä·#§©£NÂ"½t¬^¥ÄÂ"³¼¥¦¢‘×^¿I¢-£I¡-ºÜµ¦°®¯"°®¨®§ž¥¦°Á¤=º8§žµ+³¼¿¥¦² ¤¦·I¢-¥ªd¬©¤¦¢£z¤¦°´§©¨G¡¬^¯"ªN¬^£¢-£z¤9µâ¬ž³‰¤¦·I°®µ1½1¢-°®¸ž·z¤Ä°Á£I¸8³¼¿£I¡² ¤¦°®¬^£GÌ À Á 5šE'SqSØ;TKÃÂÀ™É<Â+Y+FI;?UÄ  䟰®£I¡-¢8½t¢œ¬ž­¤Ä§©°®£§k¤Ä§ž¸"ª¥¦¬^­I§©­I°Á¨®°Á¤=ºÜ°´µ=¤Ä¥¦°®­¿¤Ä°®¬ž£2³¼¬^¥ §©£Ÿº¿£ Ê £¬v½£Ú½1¬ž¥9 ¹ °˜¤á°®µá×z¿°Á¤¦¢µT¤¦¥9§©°®¸ž·z¤¦³¼¬ž¥Ä½1§ž¥ÄÂܤĬ ¿Iµ¦¢"¤¦·I°®µÔ°´µ=¤Ä¥¦°®­¿¤¦°®¬ž£°Á£$¤Ä·¢¡-¬ž£z¤¦¢-Èz¤Ô¬©³Ø§©£ŸºWªI¥¦¬^­² §©­I°Á¨®°´µT¤¦°´¡8¤9§©¸ž¸^¢-¥¹'°®£I¡-¨Á¿N°®£¸#¤Ä·¢#µT¤Ä§ž£I§©¥9 õ éÖ飍² ¸ž¥9§©¯s¤9§©¸ž¸^¢-¥9µÌ'ý?£¤Ä·°´µXµT¤¦¿Nº©¹ž½1¢¿Iµ¦¢t­°®¸^¥Ä§ž¯2µ‰§žµ'¤Ä·¢ ­I§^µT¢D¯"¬Â¢-¨{¹µT°®£I¡-¢á½1¢D§ž¥¦¢œÂ¢‘§©¨®°®£¸½°Á¤¦·§Ô¥Ä¢-¨´§©¤¦°®Æž¢¨Áº ¨®°®¯"°˜¤Ä¢Âk¤Ä¥Ä§ž°Á£I°Á£I¸Ô§©¤Ä§Ì ™$¢›¡¬ž£z¤Ä¥Ä§^µ=¤Ä¢Âx³¼¬ž¿¥$µT¢‘§©¥9¡9·s§ž¨®¸ž¬ž¥Ä°Á¤¦·¯2µ-ù÷ꂧzïÖ§ ¡¨´§^µ¦µ¦°´¡-§©¨ ­d¢§ž¯Ü²Äì8µ¦¢§©¥9¡9·Ïê‚ë1¢§ž¯Òìgï$Ŕ꼭Nïq§Öê´¤9§©¸vÆ ¨®¢³´¤¦² ·°´µT¤¦¬ž¥Äº©¹ 
¥Ä°Á¸^·z¤T²¶·°´µ=¤Ä¬ž¥Äºï6¡-¬ž¯k­°®£I§v¤Ä°®¬ž£2¬ž³'³¼¬^¥¦½§©¥9§ž£I ­I§^¡ Ê ½§©¥9­d¢§©¯Ü²9ì#µ¦¢§ž¥Ä¡9·¢‘µ2ê‚ÿö²¶ðå­N¢‘§©¯‹ìgï9¹âÆv§©¥Ä°´§v² ¤¦°®¬^£4µT¿¸^¸ž¢‘µ=¤Ä¢Â­Ÿº$¤¦·¢·°®¸ž·4¡-¬ž¯"ª¨®¢-¯"¢£z¤Ä§©¥ÄºÚ¥9§v¤¦¢‘µ §žµq¢ÙI£I¢Â°®£$êë1¥¦°®¨®¨ö§©£IÂG¹'ì‘íží}^ïâ²âÆv§©¨®¿¢‘µ°Á£¤Ä·¢ ó©ô©²{zôŒ›æ¥Ä§ž£¸ž¢Å#ꂡ‘阮³¼¿¨®¨~zq°Á¤¦¢¥¦­°8µ¦¢§ž¥Ä¡9·2Å#êÂIï§©£ §žÂvû=¿Iµ=¤Ä¢ÂÜÆv§©¥Ä°´§v¤Ä°Á¬^£¬©³ ¤Ä·¢Ø¨´§v¤¦¤¦¢-¥6¤Ä·I§v¤t¿IµT¢‘µØê´¤9§©¸vÆ ¨®¢³´¤¦² ¤Ä§ž¸¹ ¥¦°®¸^·^¤¦²¤9§©¸Ÿï6¤¦¥Ä°®¸ž¥9§©¯2µâ³¼¬ž¥q§#¡¬ž¥Ä¥Ä¢¡¤Ä°®¬ž£2ªI§^µ¦µê‚ÿG²?ð zq°Á¤Ä¢-¥Ä­°¼ï9Ì ý¶¤µ¦·¬^¿¨´Â"­N¢œ£¬ž¤¦¢‘Â2¤Ä·I§v¤¬ž¿¥¯"¢-¤¦·¬Â2¬ž³'¢‘µ=¤Ä°Á¯2§©¤T² °®£¸¨®¢ȏ°´¡-§ž¨¤Ä§ž¸ª¥Ä°®¬ž¥9µ+¡-§ž£2­d¢á¿Iµ¦¢Â#°®£2¬©¤Ä·¢-¥t¤Ä§ž¸ž¸ž°®£¸ ªI§ž¥Ä§^°®¸ž¯2µ-¹qµT¿I¡9·5§žµÚ§›¯2§©ÈŸ°®¯k¿¯è¢-£z¤¦¥Ä¬žªŸºÉ¤Ä§ž¸ž¸^¢-¥ ê‚ðq§v¤¦£N§©ªI§ž¥ Ê ·°{¹”ìí^ížñ^ï¹ö§žµD½t¢¨®¨X§^µá£I¬ž£²¶ª¥Ä¬ž­I§ž­°®¨Á°´µT¤¦°´¡ ¤Ä§ž¸ž¸^¢-¥9µ¹µ¦¿I¡9·§^µd¤¦·¢âë1¥Ä°®¨Á¨#ý µI¥Ä¿¨®¢²¶­I§^µT¢‘Âؤ9§©¸ž¸^¢-¥”ê‚ë1¥Ä°®¨Á¨{¹ ìí^í^î^ï9¹­Ÿº2°®£°Á¤¦°´§©¨®°-°®£¸á¤¦·¢D¤9§©¸ž¸^¢-¥½°Á¤¦·§¤9§©¸"¡-§ž£I°Á² §©¤¦¢µT¢-¤³¼¬^¥¢ƞ¢¥¦ºÉ¿£ Ê £¬v½£ ½t¬^¥ÄÂÅ­N§žµ¦¢ÂR¬^£R¤Ä·¢ ¨®¢ȏ°´¡-§ž¨Nª¥Ä°®¬ž¥1¢µT¤¦°®¯2§v¤Ä¢µ-Ì Ç Á >6J6<=@qJX;=Y+K ™¢›·N§gƞ¢\¤¦¢µT¤¦¢‘Â÷¤Ä·¢É£¢-½ ¯"¢¤Ä·¬ÂµW¬^£«¤=½1¬s¨®§ž£² ¸ž¿N§©¸ž¢‘µ¹WÝI¥¦¢£I¡9·Ç§ž£IÂÇÞ+£¸ž¨®°´µ¦·G¹Ú¿IµT°®£¸ ¬ž£I¨ÁºÀµ¦¯2§©¨®¨ §©¯"¬^¿£z¤Äµt¬©³'§ž££¬ž¤Ä§v¤Ä¢Â2¤¦¢ȟ¤³¼¬ž¥t¤¦¥9§©°®£°®£¸ê‚ñžô Ê ¯2§vÈ Ì ³¼¬ž¥áݍ¥Ä¢-£I¡9·G¹ ó©ôžô Ê ¯2§vÈ Ì+³¼¬^¥ØÞ+£¸^¨®°®µ¦·Nït§©£I¥Ģ-¨´§©¤¦°®Æž¢¨Áº ¨´§©¥Ä¸ž¢Ü¿£I§ž££¬ž¤Ä§v¤Ä¢Â4¡-¬ž¥ÄªN¬^¥Ä§Ïꂬž£Ï¤Ä·¢#¬^¥Ä¢-¥¬ž³¤¦¢-£Nµ ¬©³X¯"°®¨Á¨®°®¬ž£Ô½t¬^¥ÄÂIµÄï+³¼¬ž¥¡¬^¯"ª¿¤Ä°Á£I¸8¤¦·¢œªI§ž¥Ä§^°®¸ž¯2§©¤¦°´¡ °´µT¤Ä§©£N¡¢â§ž£I¡¬^£z¤¦¢ȟ¤Ä¿I§©¨zµT°®¯"°®¨´§©¥Ä°˜¤=ºžÌN Ø¨®¨zªI§©¥9§©¯"¢-¤¦¢¥Äµ ¬©³t¤¦·¢Ü¢¯Ô­d¢°Á£I¸#¯"¢¤Ä·¬ÂµD½t¢¥¦¢k¢µT¤¦°®¯2§©¤¦¢ÂÖ­I§žµ¦¢ ¬ž£É§Ïݍ¥Ä¢-£N¡9·›Â¢ƞ¢¨Á¬^ª¯"¢-£z¤ÜµT¢-¤#§©£IÂ\¿Iµ¦¢›¿£¯"¬Â°Á² ÙI¢‘Â8³¼¬ž¥XÞ+£¸^¨®°®µ¦·G¹‘³¼¿¥¦¤¦·¢¥X¢¯"ª·I§^µT°-°®£¸1¤Ä·¢t¥Ä¢-¨´§©¤¦°®Æž¢¨Áº ¨´§©£¸^¿I§©¸^¢á°®£I¢-ªd¢-£I¢£z¤q¿IµÄ§©¸ž¢8¬ž³6¤¦·¢§ž¨®¸ž¬ž¥Ä°Á¤¦·¯»­I¿¤ 
§©¨´µ¦¬ªI§©¥¦¤¦°´§©¨®¨®º¢ȏª¨´§©°®£°®£¸#¤¦·¢#¨Á¬v½1¢-¥œ­d¬Ÿ¬^µT¤¬ž£Ïªd¢-¥¦² ³¼¬ž¥Ä¯2§©£N¡¢D¬ž£Þ+£¸ž¨®°´µ¦·GÌ Í'§ž­¨®¢ñɪI¥¦¢‘µT¢£^¤9µ¤¦·¢†¥¦¢‘µT¿¨Á¤9µ¬ž­¤9§©°®£¢Âx­ŸºR¤Ä·¢ °ÁÕ ¢-¥Ä¢-£z¤$¯"¢-¤¦·¬ÂµW³¼¬ž¥·I§ž£I¨®°®£¸ ¿I£ Ê £¬v½£«½1¬ž¥9µ °®£z¤¦¬†¤¦·¢ÏÿG²?ðФ9§©¸^¸ž¢-¥9µ-̻ͷI¢ ã §ž¥Ä§^°®¸ž¯2§v¤Ä°´¡²9ì¥Ä¬v½ ¥Ä¢-ª¥Ä¢µ¦¢-£z¤9µ#¤Ä·¢ÏÆv§©¥Ä°´§v¤¦°®¬^£R°®£x½·°´¡9·x¬ž£¨®ºÅ¤Ä·¢ÙI¥9µ=¤ ªI§ž¥Ä§^°®¸ž¯2§©¤¦°´¡-§ž¨Á¨ÁºµT°®¯"°®¨´§©¥ ½1¬ž¥9ÂD³¼¬ž¿£I°´µ‰¿Iµ¦¢ÂG¹©½·°®¨Á¢ ã §©¥9§žÂ°Á¸^¯2§v¤Ä°®¡²92¢-£¬ž¤¦¢µt¤Ä·¢¡¬^¯Ô­°®£I§©¤¦°®¬ž£Ü¬ž³‰¿ª¤¦¬ 9 ¯"¬^µT¤µ¦°Á¯"°®¨´§©¥½1¬ž¥9µW§^µÖ¢µT¤¦°®¯2§©¤¦¬ž¥9µ-ÌÛ áµW¯"¢£² ¤Ä°Á¬^£¢ ª¥Ä¢-Ɵ°®¬ž¿NµT¨®º©¹2µ¦¿I¡9·»½1¬ž¥9µ¯2§gº÷£¬ž¤\§©¨®½§gºŸµ ­d¢É³¼¬ž¿£N ¹2¤¦·I¢-¥Ä¢³¼¬ž¥Ä¢\¤¦·¢Rµ¦¿gÜȟ²{­I§^µT¢‘«µ¦¯"¬Ÿ¬©¤¦·I°Á£I¸ µÄ¡9·¢-¯"¢°´µÜ¿Iµ¦¢Â\³¼¬ž¥"­I§^¡ Ê ²{¬žÕR°Á£\¤¦·I¢µ¦¢Ö¡-§žµ¦¢µ-ÌRÍ·¢ ¥Ä¢µ¦¿¨Á¤Äµ½t¢¥¦¢\¬ž­¤9§©°®£¢‘Âú¬ž£ §R¤Ä¢µT¤µ¦¢¤Ï¬©³ì`} Ê ¤Ä¬©² Ê ¢-£IµD³¼¥Ä¬ž¯Ð¤Ä·¢2ݍ¥Ä¢-£N¡9·$µ¦°´Â¢Ô¬ž³t¤Ä·¢ õ §ž£IµÄ§©¥9µD¿IµT°®£¸ ¤=½1¬Å°ÁÕd¢¥¦¢£^¤¤Ä¥Ä§ž°Á£I°Á£I¸©²?µT¢-¤µT°¢µ-¹kìgî Ê ¤Ä¬ Ê ¢£IµꂧgÆz² ¢¥Ä§ž¸ž¢ 0|0 zӥħ©¤¦°®¬Åì0ÈŸÌ Pn›kï"§©£I ñžô Ê ¤¦¬ Ê ¢£IµWê 0|0 z ¥9§v¤Ä°Á¬ }Ì ín›Ôï¹6§ž£IÂϧ©£Ï¿I£I§©££I¬©¤Ä§©¤¦¢‘ÂW¤Ä¢ȟ¤¬ž³œìgó¯"°®¨˜² ¨®°®¬ž£V½t¬^¥ÄÂIµ›³¼¥¦¬^¯ ¤¦·I¢«µ¦§ž¯"¢ú¡¬ž¥Äª¿Iµ-Ì Í·I¢úÙI¥9µ=¤ ³¼¬^¿¥Ø¥Ä¬v½ØµtªI¥¦¢‘µT¢£^¤¥Ä¢²¶°®¯"ª¨®¢-¯"¢-£z¤9§v¤¦°®¬^£Iµ6¬©³6µ=¤9§©£I§ž¥Ä ¯"¢-¤¦·¬Âµ^Ť¦·¢ íCpOÐ8x;Ñ#X7fhÔ^$•GoWhnx ¯"¢¤Ä·¬ÂµÜ¿Iµ¦¢¤Ä·¢ £¢½4ªI§ž¥Ä§^°®¸ž¯2§v¤Ä°´¡'°´µT¤Ä§ž£I¡¢+ªI¥¦¬^ªN¬zµT¢‘Â8·¢-¥Ä¢áê{䟢¡¤¦°®¬ž£ PÌÁìgï9Ì)záÿXäÖ¯"¢-¤¦·¬Â¿IµT¢‘µ8§ª¥¦¬^­I§©­I°Á¨®°´µT¤¦°´¡á¤¦¥Ä°Á¢-²¶µ¦¿gÜÈ ¯"¬Â¢¨Ì Í'§ž­¨®¢ÉÈ÷µT¿¯"¯2§ž¥¦°¢µ$¤¦·I¢5¡-¬ž£Iµ¦°´µ=¤Ä¢-£z¤†°®¯"ª¥Ä¬vƞ¢-² ¯"¢£^¤ú§ž¡9·I°Á¢ƞ¢‘ÂЭŸº ¤¦·¢»§žÂI°Á¤¦°®¬ž£Ó¬ž³†¤Ä·¢»µT¿gÜȟ² ªI§ž¥Ä§^°®¸ž¯2§©¤¦°´¡W§ž£IÂú¡¬^£^¤Ä¢ȟ¤¦¿N§©¨¯"¬Â¢¨®µÚ¤¦¬RÆv§©¥Ä°®¬ž¿Iµ ­°®¸^¥Ä§ž¯Ò¤9§©¸^¸ž¢-¥9µ-ÌÏÍ·I¢¥Ä¢µ¦¿¨Á¤ÄµÔ¬ž­¤Ä§©°®£¢‘³¼¬ž¥"ë1¥Ä°Á¨®¨#ý µ §ž¨Á¸^¬ž¥Ä°Á¤¦·¯¹¤Ä¥Ä§ž°Á£I¢ÂÚ¿Iµ¦°®£¸"¤Ä·¢kµÄ§©¯"¢Ô§v¤9§Wê=ìgî Ê òvñ^ô Ê 
½1¬ž¥9µ§©£I£¬©¤9§v¤¦¢‘¡-¬ž¥ÄªN¬^¥Ä§¹Gì‘óÔ¯"°®¨Á¨®°®¬ž£Ü½1¬ž¥9µ¿£I§ž£² £¬ž¤Ä§©¤¦¢›¡¬^¥¦ªI¿IµÄï¹§ž¥¦¢§©¨´µ¦¬ªI¥¦¢‘µT¢£^¤Ä¢ ¹1°®£›¡-¬ž£vû=¿£N¡² ¤Ä°Á¬^£½°Á¤Ä·#¤Ä·¢œ°Á¯"ªI¥¦¬vÆ^¢-¯"¢-£z¤â°®£§ž¡-¡-¿¥9§ž¡º"¸z§©°®£¢‘Â2­Ÿº ¤Ä·¢µ¦§ž¯"¢á§ž¨Á¸^¬ž¥Ä°Á¤¦·¯ ½·¢-£¢-Æ^¢-¥Äº"¿£ Ê £¬v½£½t¬^¥ÄÂ#°Á£ ¤Ä·¢¤¦¢‘µ=¤µT¢-¤Äµ”°´µ”¥Ä¢-ªI¨®§^¡¢‘½°Á¤¦·k¤Ä·¢qªI§ž¥Ä§^°®¸ž¯2§©¤¦°´¡-§ž¨˜¨Áº ¯"¬zµ=¤µT°®¯"°®¨´§©¥ Ê £¬v½£½1¬ž¥9Â2³¼¥Ä¬ž¯»¤Ä·¢œ¤¦¥9§©°®£°®£¸ÔµT¢-¤Äµ-Ì Í'§ž­¨®¢W}ª¥Ä¢µ¦¢-£z¤9µG¤¦·I¢â¥Ä¢µ¦¿¨Á¤ÄµG¬^­¤Ä§ž°Á£I¢Âá³¼¬ž¥'Þ+£¸ž¨®°´µ¦· ¬^£Ö§2¡¬^£^¤Ä°®¸ž¿¬^¿IµT¨®º"µ¦¢-¨®¢¡¤¦¢Â¤Ä¢µT¤Dµ¦¢¤Ø³¼¥Ä¬ž¯Ç¤¦·¢ß™xä Œ ¡-¬ž¥Äª¿Iµ-¹z¿Iµ¦°®£¸¡-¬ž£z¤¦°®¸ž¿I¬ž¿Iµ6¤Ä¥Ä§ž°®£°®£¸µT¢-¤Äµâ³¼¥¦¬^¯»Â°ÁÕd¢¥T² ¢£^¤”¥Ä¢-¸^°Á¬^£Iµ'¬©³I¤Ä·¢ØµÄ§©¯"¢t¡-¬ž¥Äª¿Iµ-ÌXþq¿¯Ô­d¢-¥9µ6§©£IÂk¡-§žª² °Á¤Ä§ž¨®°y‘§v¤Ä°Á¬^£#Æv§©¥Ä°´§©£I¡-¢½1¢-¥Ä¢£¬ž¤q¤¦¥Ä¢§©¤¦¢‘ÂW§žµá¿£ Ê £¬v½£ ½1¬ž¥9µ °®£œ¢-Æv§©¨®¿I§©¤¦°®¬ž£á¸ž°®Æž¢£q¤Ä·¢-°®¥G¢§^µT¢”¬ž³ ã£0 äDª¥¦¢‘°´¡² ¤Ä°Á¬^£GÌ"Í·¢µ¦¢Ü¥¦¢‘µT¿¨Á¤9µD§ž¨´µT¬µT·I¬v½÷¸ž¬Ÿ¬ÂW°®¯"ª¥Ä¬vƞ¢¯"¢-£z¤ ¥Ä¢-¨´§v¤Ä°®Æž¢á¤¦¬"¤Ä·¢­I§^µT¢¨®°Á£I¢Dªd¢-¥¦³¼¬ž¥Ä¯2§©£N¡¢D³¼¬ž¥¤Ä·¢ÔµÄ§©¯"¢ ¢¯Ô­d¢°Á£I¸k§ž¨Á¸^¬ž¥Ä°Á¤¦·¯2µ-Ì Ê µ¦°®£¸Ú¬ž¿¥ª¥¦¬^ªN¬zµT¢‘ÂW¯"¢-¤¦·¬Â³¼¬ž¥Ôª¥Ä¢°´¡¤Ä°®£¸#¤Ä·¢ ¤9§©¸°´µT¤¦¥Ä°Á­I¿¤¦°®¬ž£Nµö³¼¬^¥+ª¥Ä¢-Ɵ°®¬ž¿Iµ¦¨®º8¿£Iµ¦¢-¢-£Ü½1¬ž¥9µ”¡¬^£² µ¦°´µ=¤Ä¢-£z¤¦¨®º#°®¯"ª¥Ä¬vƞ¢‘µ1¤¦·¢k¥Ä¢µ¦¿¨Á¤Äµ³¼¬^¥D§½°´Â¢¥9§©£¸^¢¬©³ ¤Ä¥Ä§ž°Á£I°Á£I¸†µT¢-¤ÖµT°-¢‘µ§^µ½t¢¨Á¨{¹á§^µ°Á¨®¨®¿IµT¤¦¥9§v¤Ä¢›·¢-¥Ä¢$°Á£ ÝX°®¸ž¿¥Ä¢µCPÔ§ž£IÂlÔ¿Iµ¦°Á£I¸kóԏ°ÁÕd¢¥¦¢£^¤1¢-¯k­N¢‘°®£¸§ž¨®¸ž¬©² ¥Ä°Á¤¦·¯2µâ¬ž£ÝI¥¦¢£I¡9·ÂI§v¤Ä§Ì+Í·¢á¬^£¢D¢ȍ¡¢ª¤¦°®¬ž£2¤¦¬Ô¤¦·°´µ ¤Ä¥¦¢£I°´µX¬ž­NµT¢¥¦Æ^¢Â8³¼¬^¥”¬ž£¨®ºœ¤Ä·¢µT¯2§ž¨Á¨®¢‘µ=¤ö¤Ä¥Ä§ž°®£°®£¸qµ¦¢¤ µ¦°-¢â¬ž³dó Ê ½1¬ž¥9µ‰³¼¬ž¥X¤¦·I¢ÿö²¶ð'zq°Á¤¦¢¥¦­I°ž¤9§©¸ž¸^¢-¥Ì'ý?£¤¦·°´µ ªI§ž¥T¤Ä°´¡¿¨´§©¥8¡§žµ¦¢©¹6¤¦·¢µTªN§ž¡¢#°Á£Ï½·I°®¡9·ªI§ž¥Ä§^°®¸ž¯2§©¤¦°Á² ¡§©¨®¨®ºáµ¦°Á¯"°®¨´§©¥d½1¬ž¥9µö·I§gÆ^¢”¤¦¬D­d¢tµ¦¢§©¥9¡9·¢‘œ°®µ‰Æž¢¥¦ºD¨®°Á¯Ü² °Á¤¦¢‘Â8§©£IÂD¤Ä¥¦°®¢”°®£z¤¦¢¥¦ªd¬ž¨´§v¤Ä°®¬ž£Ø¯"¢-¤¦·¬ÂD¿Iµ¦¢Â8§žµ‰­I§ž¡ Ê ²¶¬©Õ 
°®£¤¦·¢k¡-§^µT¢µ¦¿I¡9·Ú½1¬ž¥9µ§©¥Ä¢8£¬©¤q³¼¬ž¿£IÂÖ¸ž°®Æž¢‘µt¢-ȍ¡¢µT² µ¦°®Æž¢8½1¢-°®¸ž·z¤¤¦¬#¤¦°®£Ÿº§gÆv§©°®¨´§©­¨®¢DµT¢-¤ÄµØ¬ž³”¤Ä§ž¸ž¸ž¢‘¢ȏ¢-¯Ü² ª¨´§ž¥Äµ-¹Ÿ§Ôª¥Ä¬ž­¨®¢¯Ã¤Ä·I§v¤Ø¡-¬ž¿¨´Â"­d¢8§žÂÂ¥Ä¢µÄµ¦¢Âܤķ¥¦¬^¿¸ž· ¯"¬^¥¦¢œ¡¬^£IµT¢¥¦Æv§©¤¦°®Æž¢q¤¦¥Ä°Á¢œµ¦¯"¬z¬ž¤¦·°®£¸¤¦¢‘¡9·£°´×z¿¢µ-Ì ÿ‰§ž£¸ž¿N§©¸ž¢žù6Ý6ðÞtþ~O õ ·MÈ`„(­Šn„,…‡Ži‹N¸¡¼5«_† ;Š­%­ ʆ‚ ‰%‚i©ß„`‹Œ† Ë»Š‚N„%)¼ i‹l€  …‡(¼=†‹Œˆ ¾eŽ%¯`‚N„e©±°-Œ„`ˆ †3’ÌË ­%¯i%‚‡Ž…‡–5©<Í Î °Ï»-_†3„(© µ ÎH°Ï Cސ…‡†B‚-ŒŽ Î °Ï-_†3„(© µ ÎH°Ï'CŽ …‡†‚-ŒŽ Î@†ªQސ3„(­OÊG‚‡Ž¶%‚BÐKg’Q†­ µ:ÑQ¼ É8ƒe¼ µ:ÑQ¼ Épƒ(¼ µ:ÑQ¼ É8ƒe¼ µ:ÑQ¼ Épƒ(¼ Æ1‹Ž…Èi†‚‡ˆ‡„(­ ÊG‚‡Ž¶%‚ ‚8ҁ Ó8‚ Ò8 ÔeÉ Ò8ƒ ‚(É Ò^Óu Ó½9 Ñ89 (É Ñp ƒ(É Ñ8Ô½ ÑpÔ ÑeɁ ƒ8ƒ Õ „(«Ž …N„(­Ž“3„%…‡Ži‹î1‹5­.¼ Ò8ƒ Ñ^ƒ ÒpÓu Ô:Ó Òµp Ò89 Ò8с 9p‚ Ñ8ԁ Ò8Ñ É8 ƒpÒ É09½ ɵ Ép ‚0Ñ Ö ‹…‡†‚i«_(­¶„%…‡†3’ §gŠ5¹±ªl” Ö §n˜ Ò09 8Ô Ò0с Ó8 Ò8 9eÉ Ò8с É8Ò É8‚%µÉ É8‚ Ò(É Ôpƒ Óµ Ép‚ Ôµ Ö ‹g…‡†B‚i«_1­¶„%…‡†3’ǧQй±ªØ× Õ „(«@ Ò09 Ôp9 Ò0с ‚p ÒpӁ…µ9 Ù]ÚxÛ ÜxÝ É8ҁ Ñ^Ò Ô89½ Ô^ Ô8ѽ щµ Þ¶ßxÛ Úuà á|â¶ã:â]ä[åæUçèâ¶éå¨ê^ë ݪìîíï Ò8 ‚p‚ Ò1Ɂ Ñ^ƒ ÒpӁ pƒ Ò(Ɂ ‚pÓ Ô^Óu Ép Ô8Ô½ ‚pƒ Ô8Ô½ Ép ‚pƒ Ò0Ñ ámâuã^â¶ävåæUçèâuéå¨êpëð ìí:ï ÒpÓu%µ$ Ò1Ɂ Ô^ ÒpӁ Ó0Ñ Ò8ԁ ƒpÓ Ô8с É8Ñ Ôpҁ Ô^Ò Ôpҁ ƒ89 ‚p ƒ0Ô [„%‚‡Ž¶„^-­†'Î@†‹5¯`…‡–˧QŠ5¹Âªî”܁“Î §Œ˜ ÒpÓu ^Ó Ò1Ɂ ÑeÉ ÒpӁ Òµ ÙUÞ Û‰àUñ Ôp‚ Ñ^‚ Ôpҁ Òp‚ ‚8ƒ Ôp9 ñ]ÜxÛ¨ÝÙ á|â¶ã:â]ä[åæUçèâ¶éå¨ê^ëð ìóòÌô”ï ÒpÓu ÑpÑ Ò1Ɂ ‚(É ÒpӁ ‚p‚ Ò8ԁ 9p Ô8ԁ Ò^Ó ‚µp ‚p ‚8ƒ ‚pÒ ‚^Óu É0Ô ámâuã^õÌå¨ö$é)ëð ìø÷Ãù ð é$úû@é ìòØôQï ÒpÓu Ô:Ó Ò1Ɂ Ò(É ÒpӁ ÒpÒ ÙUÞ Û ÜxÝ Ôp‚ Ô^Ò ‚09½…µÔ ‚µp É^Ó ñ]ßxÛ.ü¶ü Í'§ž­¨®¢œñù¤Ê†B‚ ‰`‚i©ß„%‹Œ†€%‰GÒÂÈ`„%‚‡ŽiŠˆWŠ‹¼g‹Œ ® ‹ ®ª%‚N’Ç«‚‡Ž`‚W†ˆ#…‡Ž%©ß„,…‡Ž¶`‹Ë©|†…‡–g’ˆ ÿ‰§ž£¸ž¿I§ž¸ž¢žù6Ý6ðÞâþ O õ ²Øì‘î Ê ½t¬^¥ÄÂIµâ¤Ä¥Ä§ž°®£°®£¸ ¾e†3„e© µ Î °Ï-_†3„(© µ 1Ž …‡†‚-Ž Î °Ï CŽ …‡†‚-ŒŽ ¾[‚‡Ž%­…­ ¾t‚‡Ž…­%­v× Æ1‹Ž…Èi†‚‡ˆ‡„(­@Î@†ª MÊG‚‡Ž`‚ ‚1Ɂ Ñ^Ó ‚pҁ Ó0‚ Ò8ƒ pƒ Ò8ƒ ‚(É ”܈#…N„%‹n’Œ„,‚N’_˜ ÊG„,‚ý£Žˆ#…s°iµ Ö ‹…‡†‚i«@R§QŠ5¹ÂªØ× Õ „e«@ Ò8ƒ Òµ Ò89 Ô89 Ò8 9^Ò ÒpӁ…µ9 ;ŒŠ­%­@Î@†ª MÊG‚‡Ž`‚ÐKg’†­ Ò8 0Ñ Ò^Óu Ô^Ó ÒpÓu%µ)Ó ÒpӁ 
ÒpÒ Òp ‚pÓ Ò8 Ò(É ÿ‰§ž£¸ž¿I§ž¸ž¢žù6Ý6ðÞâþ O õ ²âñžô Ê ½t¬^¥ÄÂIµâ¤Ä¥Ä§ž°®£°®£¸ ¾e†3„(© µ Î °Ï-_†3„(© µ 1Ž …‡†‚-ŒŽ ÎH°Ï Cސ…‡†B‚-ŒŽ ¾t‚‡Ž…­%­ ¾[‚‡Ž%­%­2× Æ1‹ŒŽ%È=†B‚‡ˆ‡„(­@Î@†ª MÊG‚‡Ž¶%‚ Òµ8 p Òp Ô(É Ò^Óu 8 ÒpÓu Ó09 ”܈#…N„`‹Œ’Œ„,‚N’_˜ ÊG„,‚ý£Žˆ#…s°µ Ö ‹…‡†‚i«@M§gŠ5¹±ªÌ× Õ „e«@ ÒpÓu pƒ Ò8с ‚8 Ò(Ɂ ƒ8‚ Ò1Ɂ µ ;Š­%­xÎ@†ª MÊG‚‡Ž¶%‚BÐKg’Q†­ Ò0с ԉµ Ò(Ɂ Ò1É Ò8ԁ ƒ1É ÙUÞUÛ ÜxÝ Ù]ÚxÛ Ù]ñ ÙUÞ Û¨Ý0ü ÍX§©­¨®¢Èzù¤Ê†‚ ‰y`‚i©ß„`‹†€`‰GV«‚‡Ž`‚W†ˆ#…‡Ž%©ß„%…‡Ž`‹Ë©|†…‡–Œg’Qˆ¸® –Œ†‹HŠŒˆ †3’ ސ‹)’Ž·þ7†‚‡†‹… †©F-_†3’Œ’Qސ‹¯/„(­%¯i%‚‡Ž…‡–5©|ˆ ÿ‰§ž£¸ž¿N§©¸ž¢žù6Ý6ðÞtþ~O õ 78% 80% 82% 84% 86% 88% 90% 92% 94% 96% 98% Full Model (Paradigm + Context + VLS) Interpolated Suffix Universal Prior 2k 4k 8k 30k 60k 80.76% 84.23% 86.62% 96.96% 95.83% 93.76% Accuracy Training Set Size (number of tokens) 15k ÝX°®¸ž¿¥Ä¢ôPù¨Ê†‚ ‰y`‚i©ß„%‹Œ†C`‰[ «Q‚‡Ž`‚»†ˆ#…‡Ž%©ß„,…‡Ž¶`‹ ©|†…‡–Œg’Qˆ ސ‹ÌÎH°Ï»¾e†3„(© µ“¸„e¯1¯i†B‚CŠˆ ސ‹¯¶Èi„,‚‡Ž¶`ŠŒˆ»ˆ ސ“†1… ‚N„%ސ‹ŒŽ‹¯/ˆ †…‡ˆ ÿ AY+KáC <T@á[z;=Y+K Í·°´µ+ªN§©ªd¢-¥·I§žµâª¥Ä¢µ¦¢-£z¤¦¢‘Â2§£¬vÆ^¢-¨{¹^¢eg2¡°®¢-£z¤1§ž£IÂ"¢-³´² ³¼¢¡¤¦°®Æž¢1¯"¢¤Ä·¬Â8³¼¬^¥X¢‘µ=¤Ä°®¯2§v¤¦°®£¸Ø¤¦·¢¨®¢ȏ°´¡-§ž¨©¤Ä§ž¸áªI¥¦¬^­² §©­I°Á¨®°Á¤=ºœÂ°´µ=¤Ä¥¦°®­¿¤Ä°®¬ž£Iµö³¼¬ž¥+§D¨´§©£¸^¿I§©¸^¢â½·¢£Ô¬^£¨®º8¨Á°®¯Ü² °Á¤¦¢‘Âs§ž££¬ž¤Ä§v¤Ä¢Âs¤Ä¥Ä§ž°Á£I°Á£I¸›Â§©¤Ä§É°´µÖ§gÆv§©°®¨´§©­I¨Á¢žÌÓÍ·¢ ¯"¢¤Ä·¬Â÷¬^¿¤¦ªd¢-¥¦³¼¬ž¥Ä¯2µW§xµT¢-¤$¬ž³îPx°˜Õ ¢-¥Ä¢-£z¤¤¦¥9§žÂ°Á² ¤¦°®¬^£I§©¨ŸµT¿Hgkȟ²¶­I§žµ¦¢Â¢µT¤¦°®¯2§v¤Ä¬ž¥9µ¹g°®£I¡¨®¿I°®£¸·I°Á¢¥Ä§ž¥Ä¡9·I°˜² ¡-§ž¨®¨ÁºµT¯"¬Ÿ¬©¤Ä·¢Âµ¦¿gÜȤ¦¥Ä°Á¢"¯"¬Â¢¨´µ¹6­Ÿº°´Â¢£^¤Ä°Á³¼ºz°®£¸ ¯"¬ž¥Ä¢·°®¸ž·¨®ºª¥Ä¢°´¡¤Ä°®Æž¢2¤9§©¸¢ȏ¢¯"ª¨´§©¥9µ¤Ä·¥¦¬^¿¸ž·É§ ÿ'§©£¸^¿I§©¸^¢©ù”ÝXðØÞâþ~O õ 78% 80% 82% 84% 86% 88% 90% 92% 94% 96% 98% 2k 4k 8k 30k 60k 97.31% 96.31% 94.42% 77.39% 78.92% 81.02% 88.61% 88.17% 83.63% 15k Accuracy Full Model (Paradigm + Context +VLS) Interpolated Suffix Universal Prior Training Set Size (number of tokens) ÝX°®¸ž¿¥Ä¢€ù¨Ê†‚ ‰y`‚i©ß„`‹†1%‰vX«Q‚‡Ž¶%‚»†ˆ#…‡Ž%©ß„%…‡Ž`‹4©|†B…‡–Œg’ˆ ސ‹NÎ °Ïb1Ž …‡†‚-Ž[¸„e¯1¯`†‚1Šˆ 
ސ‹¯±È`„%‚‡ŽiŠˆ ˆ ޶“†£… ‚N„`ސ‹ŒŽ‹¯~ˆ †…‡ˆ ¡-¬ž¯k­°®£I§v¤Ä°Á¬^£"¬©³6ªI§©¥9§žÂ°Á¸^¯2§v¤Ä°®¡§ž£I¡¬^£^¤Ä¢ȟ¤¦¿N§©¨öµ¦°Á¯Ü² °®¨´§©¥Ä°Á¤=ºW¯"¢§^µT¿I¥¦¢‘µ̛Þ⧞¡9·›¬©³á¤¦·¢‘µT¢¯"¬Â¢-¨´µk¿Iµ¦¢µÜ§žµT² µ¦¬¡°´§v¤Ä°Á¬^£Iµâ§ž£IÂ#°®µT¤¦¥Ä°®­¿¤Ä°Á¬^£I§©¨NµT°®¯"°®¨´§©¥Ä°˜¤Ä°®¢µ'¬ž­Iµ¦¢-¥Äƞ¢‘ °®£W¨´§ž¥¦¸^¢×z¿I§©£z¤¦°Á¤Ä°Á¢‘µØ¬ž³â¥Ä§g½5¤Ä¢ȟ¤œ¤¦¬¡¬^¯"ªN¢£Iµ¦§©¤¦¢³¼¬ž¥ ¨®°®¯"°Á¤¦¢Âx×z¿I§©£z¤¦°Á¤Ä°Á¢‘µ¬©³"¤9§©¸ž¸^¢Âx¤¦¥9§©°®£°®£¸É§©¤Ä§¹Ü§©£I ¢‘§ž¡9·4°´µ¨´§ž£¸ž¿I§ž¸ž¢"°®£I¢ªN¢£I¢£^¤Ô¤¦¬W¤¦·¢¢ȟ¤¦¢£z¤¤¦·N§v¤ £¬†¯"¬ŸÂ°˜Ùd¡-§v¤Ä°®¬ž£4°´µ"¥Ä¢×z¿°®¥Ä¢ÂϤĬ†µ¦·°Á³´¤#§©ªª¨®°´¡-§©¤¦°®¬ž£Nµ ³¼¥Ä¬ž¯ ݍ¥Ä¢-£I¡9·Ã¤¦¬úÞ+£I¸ž¨®°´µT·G¹"¬^¥Ï¬©¤¦·I¢-¥†µ¦¿gÜÈ𮣏øI¢‘¡² ¤Ä°ÁÆ^¢¨´§ž£¸ž¿I§ž¸ž¢‘µÌ Ê µ¦¢¬ž³q¤Ä·¢µ¦¢£¬vƞ¢¨¨Á¢-ȏ°®¡§©¨tª¥¦¬^­² §ž­°®¨Á°Á¤=ºÖ¢‘µ=¤Ä°®¯2§v¤¦°®¬^£¯"¢-¤¦·¬ÂµÔ§ž¡9·°®¢-Æ^¢µ§$óuțТ¥¦¥Ä¬ž¥ ÿ'§©£¸^¿I§©¸^¢©ù6Þâþ î ÿ‰ýTä õ ·RÈ`„(­Šn„,…‡Ž¶`‹N¸G¼«_† ;Š­%­oʆB‚ ‰`‚i©ß„%‹Œ† Ë Š‚N„%)¼ i‹)€  …‡(¼=†‹Œˆ ¾eŽ%¯`‚N„(©±°-Œ„`ˆ †3’<˜­%¯i`‚‡Ž …‡–5©<Í Î °Ï -_†„(© µ ÎH°Ï Cސ…‡†B‚-ŒŽ Î °Ï-_†3„(© µ ÎH°Ï Cސ…‡†B‚-ŒŽ ÎȆªQސ3„(­OÊ‚‡Ž%‚BÐKg’Q†­ Ñpƒ(¼ 9pƒ8ƒe¼ Ñpƒe¼ 9^ƒ8ƒ(¼ Ñpƒe¼ 9pƒpƒ(¼ Ñpƒe¼ 9^ƒ8ƒ(¼ Æ1‹ŒŽ%È=†B‚‡ˆ‡„(­OÊ‚‡Ž`‚ ‚8с Ò8Ò Òµ8 9p9 ‚8‚ ‚µ Ò09½ Ôµ 98с ƒ(É µ8%µ$ƒ 8ҁ Ó8Ò Ó8 ƒ8 Ö ‹…‡†‚i«@;§gŠ5¹±ªØ× Õ „e«@ Òµ8 0Ñ Ò8 Ó8 Ò8 ƒpƒ ÒpӁ Ôp Ô^Óu 9eÉ Ô8с ƒpÒ Ô8Ô½ pÒ Ô^ҁ%µ:Ô ¸@‚‡Ž†€“Î § Òµ8 9(É Ò8 ÑpÑ Ò09 Ò8Ô ÒpӁ Ò8‚ Ô^Óu ƒp Ô8с ÉpÒ Ô8Ô½…µÔ Ô^ҁ Ñp ;ŒŠ­%­[ÐKg’†­ Òµ8 Ñp‚ ÒpÓu ƒ^Ó Ò8 µ Ù]ßxÛ Ü]Ú Ô^Óu Òp‚ Ô8ԁ 9^ƒ Ôp‚ ƒpÒ ñ¶à@Û ñ]Ù ÍX§©­¨®¢ }ùœÊ†‚ ‰y`‚i©ß„`‹†€`‰vÓ±­†ªQ޶„(­o«Q‚‡Ž`‚W†ˆ#…‡Ž%©ß„%…‡Ži‹Ç©|†…‡–Œg’Qˆªi‹K‚‡†’ŠŒ†3’dˆ ސ“†ôˆ †…‡ˆ»‰y‚‡(©  § Õ %‚i«ŒŠˆ ¥9§v¤¦¢k¥Ä¢¿I¡¤¦°®¬ž£W°Á£W³¼¿¨®¨ªzq°Á¤¦¢¥¦­°X¤9§©¸ž¸^¢-¥áªN¢¥T³¼¬^¥¦¯2§ž£I¡¢ ³¼¬ž¥1ݍ¥Ä¢-£I¡9·"¬vÆ^¢-¥â§ž£2°®£z¤¦¢-¥Äªd¬ž¨´§v¤Ä¢Ÿ²?µT¿Hgkȯ"¬Â¢¨I­I§^µT¢-² ¨®°®£¢©¹‰§ž£IÂ4ìgó›V¢¥¦¥Ä¬ž¥D¥Ä§©¤¦¢k¥¦¢‘¿I¡¤¦°®¬ž£W³¼¬ž¥8¢×z¿°®Æv§©¨®¢£^¤ ³¼¿¨®¨+¤9§©¸^¸ž¢-¥ªN¢¥T³¼¬^¥¦¯2§ž£I¡¢Ü¬ž£ÏÞ⣍¸ž¨®°´µT·öÌ™R·I¢-£†¡-¬ž¯Ü² ªI§ž¥¦¢‘½°Á¤Ä·Ö§2µ=¤9§v¤Ä¢²¶¬©³´²{¤¦·¢-²¶§ž¥T¤¯"¬Â¢-¨‰³¼¬ž¥Ø·I°Á¢¥Ä§ž¥Ä¡9·I°˜² 
¡-§ž¨®¨Áºœµ¦¯"¬z¬ž¤¦·¢‘ÂÆv§©¥Ä°´§©­¨®¢-²{¨®¢-£I¸©¤¦·8µ¦¿gÜȤ¦¥Ä°®¢µ-¹g¤¦·¢Ø§^Ÿ² °Á¤¦°®¬^£á¬ž³z¤Ä·¢+ªI§ž¥Ä§^°®¸ž¯2§©¤¦°´¡‰§ž£Iœ¡¬ž£z¤Ä¢ȟ¤¦¿I§ž¨^°´µT¤Ä§ž£I¡¢ ¯"¢§^µT¿I¥¦¢‘µt§^¡9·°®¢-ƞ¢‘µt§gÈŸÌ }n› ¢-¥Ä¥¦¬^¥t¥9§v¤Ä¢œ¥¦¢‘¿I¡¤Ä°®¬ž£2³¼¬^¥ ݍ¥Ä¢-£N¡9·#§©£IÂèÈzÌ ñŒ›÷¢-¥Ä¥¦¬^¥â¥Ä¢¿N¡¤¦°®¬^£"¬ž£#Þ⣍¸ž¨®°´µT·öÌ ã ¢-¥¦² ³¼¬ž¥Ä¯2§©£N¡¢Ôµ¦·¬v½Øµ8§¡¬^£Iµ¦°®µT¤¦¢£z¤á°®¯"ª¥Ä¬vƞ¢-¯"¢£z¤q§ž¡¥Ä¬^µÄµ "°ÁÕd¢¥¦¢£z¤¢¯Ô­d¢ÂI°®£¸¤9§©¸ž¸^°®£¸k§ž¨®¸ž¬ž¥Ä°Á¤¦·¯2µ-Ì Ý¿¥¦¤¦·¢¥sµT¤¦¿N°®¢µR§ž¥¦¢÷°®£Ðª¥Ä¬ž¸^¥¦¢‘µ¦µÅ¤¦¬Ç¡-¬ž¯"ªI§ž¥¦¢ ¤¦·I¢$¿NµT¢-³¼¿¨®£¢µÄµ#¬ž³¤¦·I¢µ¦¢¤Ä¢¡9·£I°®×z¿¢‘µ#¬^£ ¨®¬v½²?¡¬ž¿I£^¤ ê¼¥9§v¤Ä·¢-¥2¤¦·I§ž£›¿I£IµT¢¢-£Nï"½t¬^¥Äµ-¹1§ž£IÂŧ©¨´µ¦¬W¤Ä¬Ï¢ȟ¤¦¢£I ¤¦·I°®µD½1¬ž¥ Ê ¤¦¬ð¬ž¯2§ž£°´§©£G¹tOC-¢‘¡9·$§ž£IÂ䏨Á¬vÆ^¢-£°´§ž£G¹ §^µ ³¼¿¥¦¤¦·¢¥4¢ȍ§ž¯"ª¨®¢µ¬©³·°®¸ž·¨®ºú°®£øI¢‘¡¤¦¢‘«¨´§ž£¸ž¿I§ž¸ž¢‘µÌ Þ+Ɵ°´Â¢-£N¡¢2³¼¥Ä¬ž¯æµ¦·°Á³´¤¦°®£¸§©ªª¨®°´¡-§©¤¦°®¬ž£NµD³¼¥¦¬^¯æÝ¥Ä¢-£I¡9· ¤¦¬ÔÞ+£¸^¨Á°´µ¦·Ü°Á£N°´¡-§v¤Ä¢µ6¤Ä·I§v¤1¥¦¢‘µTªd¢¡¤Ä§ž­¨®¢ªN¢¥T³¼¬^¥¦¯2§ž£I¡¢ ¡-§ž£Å­N¢W¬ž­¤9§©°®£¢Â\½°Á¤¦·¬^¿¤#¢ƞ¢-£›¤Ä·¢Ö¥Ä¢²¶¢µT¤¦°®¯2§v¤Ä°Á¬^£ ¬©³tªI§©¥9§©¯"¢-¤¦¢-¥9µ¬^£Ö£¢-½«¨®§ž£¸ž¿N§©¸ž¢‘µ¹ §©¨Á¤¦·I¬ž¿¸^·Ú½t¢Ü¬ ¢ȏªd¢¡¤X¤Ä·I§v¤”µ¦¬ž¯"¢âªN§©¥9§©¯"¢¤Ä¢-¥'¥¦¢-²{¬^ª¤¦°®¯"°§©¤¦°®¬ž£q¡-¬ž¿¨´Â ª¥Ä¬vƞ¢6¿Iµ¦¢³¼¿¨{Ì™¢â­d¢-¨®°®¢-Æ^¢‰¤Ä·I§v¤‰¤¦·°´µ‰§žªª¥Ä¬^§^¡9·Dµ¦·¬ž¿I¨®Â µ¦·¬v½ ¤Ä·¢4¸^¥¦¢‘§v¤¦¢‘µ=¤Ö­d¢-£I¢ٍ¤9µÚ³¼¬ž¥W¤Ä§ž¸ž¸ž¢¥ÄµÚ¢µ¦°Á¸^£¢ ³¼¬ž¥1·°®¸ž·I¨ÁºÔ°®£øI¢¡¤¦°®Æž¢¨´§ž£¸ž¿I§ž¸ž¢‘µ¹^µ¦¿I¡9·#§^µqê õ §vû=°´üاž£I õ ¨´§^ Ê ¹ìí^íQ}zï"§©£I«ê¼Þ+¥ûT§gƞ¢‘¡Ú¢¤§©¨{̘¹ì‘íží^í^ï9¹Ø¸^°®Æž¢-£ ¤¦·N§v¤4¤Ä·¢ §žµÄµ¦¬Ÿ¡-°´§v¤¦°®¬^£I§©¨kªN¬v½1¢-¥4§ž£IÂ˪d¬©¤¦¢£z¤¦°´§©¨"³¼¬^¥ ¤¦·I¢áª¥Ä¬žªd¬^µ¦¢ÂܪN§©¥9§žÂ°®¸ž¯2§©¤¦°´¡âµ¦°®¯"°®¨®§ž¥¦°Á¤=ºD¯"¢§^µT¿¥Ä¢Ø§©¥Ä¢ ¯"¬^µT¤¡¬ž¯"ªd¢-¨®¨®°®£¸8³¼¬^¥ØµT¿I¡9·¨´§©£¸^¿I§©¸^¢µ-Ì  ™†C]KáY‰Z†<TE‰S'Â+EtÃEöK1[ Í·¢D§©¿¤Ä·¬ž¥9µ+½1¬ž¿¨´Âܨ®° Ê ¢¤Ä¬¤¦·N§©£ Ê Œ^§©£ õ §vû=°´ü³¼¬ž¥t·°´µ ¢ȟ¤Ä¥¦¢¯"¢-¨®º4Æv§©¨®¿I§ž­¨®¢µ¦¿¸^¸ž¢µT¤¦°®¬^£Iµ"§©£IÂų¼¢-¢­I§ž¡ Ê ¬ž£ ¤¦·I°®µ½1¬ž¥ Ê Ì  E 9EGFIEöKDC E‰[ § Ë|-Œ‹†)¼½/½Ï' ·Q5§QN–n„e«ŒŽ ‚‡†8/Q„%‹n’ 
J5§Qސ‹5¯i†‚¨µ$Ò8Ò8ҁ ¾eiˆ#…s° ސ‹5¯‘„(«5«­Ž†3’¨…‡•…N„(¯(¯iސ‹¯ „%‹n’ Ê;Êç„%… …N„%B–5©|†‹… À{6º Á´i´1½¿81'6 ! #"$$$p/5«n„e¯i†ˆm8‚&%½Ó0с ¸“ ¾[‚N„`‹…‡ˆ$ 9^ƒ8ƒ8ƒb¸@‹]¸Ü°ô„lˆ#…N„%…‡Žˆ#…‡Ž3„e­ «Œ„%‚ …s°`‰Ÿ°8ˆs«_††B– …N„e¯1¯`†‚ ¨À}6 Á>´i´1½¿81Ã6' )(***p/«n„(¯`†ˆm989:Ó+%9^µp ·Q¾[‚‡Ž%­%­ „%‹n’)¶  Š@ µÒ8Òp‚ Õ ­¶„`ˆ ˆ Ž-,n†B‚‘(©J-ŒŽ‹Œ„%…‡Ži‹ ‰y`‚1Ž…©¬«Q‚‡QÈ=†3’Ç­†ªQސ3„(­’Qސˆ‡„(©J-Ž%¯iŠn„,…‡Ži‹@ À{6 Á´i´1½¿g6. !/0132¨º4'5 6#"$7$788/5«n„(¯`†ˆ µÒµ9%[µ$Ò0с ·f¾[‚‡Ž%­%­ µÒ8Ò8с ¸È‚N„`‹Œˆ#‰y`‚i©ß„,…‡Ž¶`‹5°-n„%ˆ †3’ㆂ ‚‡%‚s°{’g‚‡Ž%È=†‹ ­†3„,‚‡‹ŒŽ‹¯„`‹n’ ‹Œ„%…‡Š‚N„e­¨­¶„`‹¯`Šn„e¯i†Ë«Q‚‡†ˆ ˆ ސ‹¯Í<Ëñ3„%ˆ † ˆ#…‡ŠŒ’¼bސ‹f«Œ„%‚ …l`‰ ˆs«_††N– …N„(¯(¯iސ‹¯: ¡684;U3³K²³½¨61¿²<  ½¿83½1³½KÁ$1.=(>"@?BADC/[«Œ„(¯i†ˆ“Ñ:Ó0&%Ñ(É8с ·f Õ –Œ„%‚‡‹ŒŽ¶„e¼ / Õ  — †‹n’g‚‡Ž>¼gˆ i‹x/FEÁG„`8-Œˆ `‹@/'„%‹n’ Ð  ʆ‚i¼= ® Ž …‡“8 µÒ8ÒpÇ·IHŠŒ„%…‡Ži‹ˆô‰y`‚¶«n„,‚ …s°`‰°ˆs«_††B– …N„e¯1¯iސ‹5¯uJ ¨À}6 Á>´i´1½¿816¬³ €5´"7"1³ €KX²³½¨6(¿[²< ¡61¿&´À>º ´¿Á>´'61¿L'œÀ>³½ MMÁ½K²<@1>¿[³"´N<O< ½-‰´¿[Á´P"$$7Q8/5«n„e¯i†ˆQÔp‚pÓR%Ô^‚8ҁ S  Õ –gŠQ‚‡B–x µÒ8‚p‚OË ˆ#…‡N–n„%ˆ#…‡Ž¶ «Œ„%‚ …‡ˆ «‚‡(¯`‚N„(© „%‹n’ ‹iŠŒ‹@«Œ–Q‚N„`ˆ †V«n„%‚‡ˆ †B‚€‰`‚ Š‹‚‡†ˆ#… ‚‡Ž…‡†3’)…‡†ªg…T ¨À}6 Á´´º ½¿81f6¤³ €5´(¿UL ¡61¿N´Às´¿Á>´m61¿V'I;;W< ½K´XYX²³3À.²<H²1¿[º 83²&‰´Z À{6 Á´)11i½¿"$7888/«n„e¯i†ˆXµ(ÉR%µ$Ó0 §  Õ ŠŒ†‚‡“„`‹•„`‹Œ’<ýJU e„%‚‡ ® ˆs¼½¼½åµÒpÒ8ҁ“Î@„`‹5¯iŠn„e¯i† ސ‹n’†>° «_†‹Œ’†‹…€‹Œ„(©|†3’l†‹…‡Ž …}¼)‚‡†1¯i‹Ž …‡Ž¶`‹ 1©F-ŒŽ‹ŒŽ‹5¯Ã©|%‚s° «–Œ1­(¯iސ3„(­„%‹n’H`‹…‡†ªg…‡Šn„(­È†Èg޶’†‹†8 À{6 Á´i´1½¿81F6 I W[ "$7$$p/5«n„(¯`†ˆmÒ8ƒR%½Ò8ҁ Õ \F»’Q† ÐH„%‚‡¼=†‹ µÒpÒ8ƒ ÊG„%‚‡ˆ ސ‹¯ …‡–Œ†ªÎE¾ `‚i«ŒŠˆ$ À{6 Á´i´1½¿81Ã6'5 6F"$7$*p/[«Œ„(¯`†ˆ9:Ó0&%98щµ8 ¸“M·‚O] „ È=†8/ § ¡ý£“†‚‡iˆs¼gލ/ª„%‹n’^¶I_7„ Èg‚‡†­ µÒpÒ8ҁ ÐK%‚s° «–Œiˆ{¼Q‹…N„`B…‡Ž¶H…N„e¯1¯`޶‹5¯ `‰ôˆs­QÈ=†‹Œ†pÍÍ·RÈi„e­¶ŠŒ„%…‡Ž‹¯TÊ[X§ …N„e¯1¯i†B‚‡ˆ€„`‹Œ’K…N„(¯iˆ †B…‡ˆ$¸E†B–‹ŒŽ3„(­@‚‡†«_`‚ …/Uý1†«…Œ`‰ Ö ‹° …‡†­%­Ž%¯i†‹…¨§¼Qˆ#…‡†©|ˆ$/ii“†‰o§…‡†‰Ü„%‹ Ö ‹Œˆ#…‡Ž …‡Š…‡†p/½Î>]#ж-5­ ] „`‹Œ„ ¶ — „&]#Ža`/„%‹n’<¾| 
— ­¶„`’5¼Db¤µÒpÒ8‚Q¸„(¯(¯iސ‹¯ßސ‹@cŒ†…‡Ž%È=†å­¶„`‹° ¯`Šn„(¯`†ˆ$ÍÂÊ‚‡†3’Qސ…‡Ži‹‘%‰¨©|`‚i«–Œ1­(¯iސ3„(­[3„%…‡†¯i`‚‡Ž†ˆô‰%‚ „d‚‡ŽN–@/Eˆ#… ‚‡ŠŒ…‡ŠQ‚‡†3’•…N„e¯iˆ †… À{6 Á´i´X(½¿56. !/I132¨º '5 6#"$$D88/«Œ„(¯`†ˆmÓ8‚8&%½Ó0Òpƒ ¶ — „N] Ža`879pƒpƒ8ƒ7ÐK`‚i«–Œ(­¶(¯iސ3„e­t…N„e¯1¯iސ‹5¯uÍô’„%…N„ ÈQˆ$@’Q޶>° …‡Ži‹Œ„%‚‡Ž†ˆ$d À{6 Á´i´X(½¿81Ø6eY''5 6f(>***p/M«Œ„(¯i†ˆ7ÒpÓe° µ$ƒµ8 ËÁ2ÏC„%…‡‹Œ„(«Œ„%‚i¼g–ŒŽ¨éµ$Ò8Ò(Ɂ Ë ©ß„eªQŽ%©~Š5© †‹… ‚‡1«½¼Í©|g’†­ ‰y`‚嫌„%‚ …s°`‰Ÿ°ˆs«_††N–l…N„e¯1¯`޶‹5¯ug ¨À}6 Á´´1½¿8176.YI "$7$h^/[«Œ„(¯`†ˆXµ8&%[µ)Ó½9 \Fœ§„e­ …‡i‹b„`‹Œ’»Ð  ¶|ÐKR\1Ž%­%­¨ µ$Ò8‚pi1>¿[³À}6&83Á ³½¨61¿ ³¨6 ª6&´À>¿J1>¿&)6(À,4²³½¨61¿Lj¤´³À>½K´)~Q²< !ÐKR\C‚N„®¨° — Ž%­…­¨ Õ ¸§„(©/ŠŒ†­ˆ ˆ `‹@ µÒ8Òp ÐK`‚i«Œ–1­1¯`ސ3„(­W…N„(¯1¯`ސ‹¯è-Œ„`ˆ †3’ †‹…‡Ž ‚‡†­·¼H`‹ª¾ª„$¼†ˆ ޶„`‹lސ‹‰y†‚‡†‹Œ†8e$1³ €B61À.1½ÁJ ¡61¿&´À>º ´¿Á>´F61¿^ ¡684;]35³K²1³½¨61¿[²7<k9½a83½1³½KÁ$1Z"$$7Q8 — V§gB–>l…‡“†8 µÒpÒ8 ÊG„,‚ …s°`‰°ˆs«_††B– ޶‹Œ’Š…‡Ži‹v‰y‚‡(© ˆ ‚N„,…‡B–x ¨À}6 Á>´i´1½¿81Ã6'5 6F"$$7Q8/«n„e¯i†ˆm98щµm%98Ñp‚ §  Ð J¸ª–Œ†3’Q†8 µÒpÒ8‚ ÊG‚‡†3’Q޶B…‡Ž¶‹5¯ «n„,‚ …s°`‰°ˆs«_††B–ސ‹° ‰y`‚i©ß„%…‡Ž`‹ „^-_iŠ…KŠŒ‹5¼Q‹ ® ‹é®ª%‚N’ˆHŠˆ ޶‹5¯ ˆ#…N„%…‡Žˆ#…‡Ž3„(­ ©|†…‡–g’ˆ$0 À{6 Á´i´1½¿k6T !/0132¨º4'5 6n"$$78p/5«Œ„(¯`†ˆ µÑpƒ0ÑN%[µÑpƒ0Ô½ ÏÁ  †ސˆ B–†3’†­/uÐ ¶ÐK††B…‡†‚/ÏÁ§gB–‰®ª„%‚ …‡“8/сÏ1„e©|ˆ –n„®B/ „%‹n’ ¶ ÊG„(­%©~Šލ µ$Ò8Ò8 Õ 1«ŒŽ‹5¯ ® Ž …‡–'„(©J-Ž%¯iŠŒŽ …¼ „%‹n’Š‹¼g‹Œ ® ‹ ®ª`‚N’Qˆ …‡–Q‚‡iŠ5¯i–ð«Q‚‡8-Œ„p-ŒŽ%­Žˆ#…‡Ž4©|g’Q†­ˆ$ ¡684;U3³K²³½¨61¿²<W ½¿83½1³½KÁ$1.=!"$@?oQNC/[«Œ„(¯`†ˆ“8ÑpÒ&%8‚89
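As an illustration of the exemplar-based estimation described above, the consensus step (ranking candidate exemplars by a distance and uniformly averaging the tag distributions of those within a threshold, backing off when none qualify) can be sketched as follows. The choice of an L1 distance, the toy distributions and the threshold value are simplifying assumptions for this example, not the paper's exact formulas:

```python
# Sketch of consensus tag-prior estimation from paradigmatically similar
# exemplars: exemplars within a distance threshold contribute their tag
# distributions with uniform weight; otherwise we back off to a given
# suffix-based estimate.  All distributions here are toy values.

def l1_distance(p, q):
    """L1 distance between two probability distributions held as dicts."""
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def consensus_prior(exemplars, threshold, backoff):
    """exemplars: list of (distance, tag_distribution) pairs, where the
    distance stands in for the paradigmatic distance to the unseen word."""
    close = [dist for d, dist in exemplars if d < threshold]
    if not close:                       # no exemplar close enough: back off
        return dict(backoff)
    tags = set().union(*close)
    return {t: sum(dist.get(t, 0.0) for dist in close) / len(close)
            for t in tags}

exemplars = [(0.10, {"NN": 0.8, "VB": 0.2}),
             (0.20, {"NN": 0.6, "VB": 0.4}),
             (0.90, {"JJ": 1.0})]       # too distant: excluded
prior = consensus_prior(exemplars, 0.5, backoff={"NN": 1.0})
# prior averages the two close exemplars: {"NN": 0.7, "VB": 0.3}
```

The uniform average over near exemplars mirrors the weighting the evaluation found best; a distance-weighted average would be the natural alternative to try.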
Independence and Commitment: Assumptions for Rapid Training and Execution of Rule-based POS Taggers

Mark Hepple
Department of Computer Science, University of Sheffield,
Regent Court, 211 Portobello Street, Sheffield S1 4DP, UK
hepple@dcs.shef.ac.uk

Abstract

This paper addresses the rule-based POS tagging method of Brill, and questions the importance of rule interactions to its performance. Adopting two assumptions that serve to exclude rule interactions during tagging and training, we arrive at some variants of Brill's approach that are instances of decision list models. These models allow for both rapid training on large data sets and rapid tagger execution, giving tagging accuracy that is comparable to, or better than, the Brill method.

1 Introduction

Part-of-speech (POS) tagging is the task of assigning to each word in a sentence a tag indicating its lexical syntactic category, such as noun or verb. POS tagging of text is required for subsequent processes in many systems, e.g. syntactic parsing. A number of alternative models and methods for tagging have been explored, most particularly with a view to improving tagging accuracy, including: hidden Markov models (Church, 1988; Charniak et al., 1993; Cutting et al., 1992), rule-based methods (Brill, 1995), maximum entropy methods (Ratnaparkhi, 1996), memory-based methods (Daelemans et al., 1996), amongst others.

This paper addresses the rule-based POS tagging approach of Brill (1995), which learns language models that consist of a sequence of transformation rules (TRs) which capture contextual factors in predicting correct assignments. The approach allows certain interactions between rule uses, whereby a change effected by one may affect whether or not another may subsequently fire. Such interactions have been assumed important for the approach in regard to its success on the POS tagging task.

In this paper, we explore the possibility that such interactions are not empirically important for Brill's approach, at least as applied to POS tagging. To this end, we introduce two assumptions, called independence and commitment, which serve to exclude rule interactions, and develop two variants of Brill's approach that realise these assumptions. These methods turn out to be instances of decision list models (Rivest, 1987), a standard approach within the machine learning field. These models allow for training and execution algorithms that give much improved performance in terms of speed, with an impact on tagging accuracy that ranges from slight degradation to small improvement. These results serve to shed light on the importance of rule interaction to the performance of Brill's original model.

2 Brill's Tagging Approach

Brill's supervised learning model ("transformation-based error-driven learning") works as follows. The training data is correctly annotated text. The corresponding raw text is fed through an initial-state annotator, which makes a more or less well-informed initial guess of how the text should be annotated. This initial annotation is compared to the training data as a basis for learning a sequence of TRs, which are context dependent "correction" rules, that apply in sequence to modify the initial annotation to better approximate the training data. The trained system consists of the initial-state annotator together with the ordered sequence of TRs, which can be applied to unseen text.

For POS tagging, the initial-state annotator assigns each known word its most-probable tag from amongst those listed in a lexicon, as determined from some training corpus. The initial tagging of unknown words can be handled in a number of ways, using clues such as affixes and capitalisation. However, we shall not dwell on the treatment of unknown words in this paper. The initial annotation is compared with the training data to identify TRs that improve accuracy by making specific context-bound correction steps, e.g. replacing one tag with another, provided some given tag appears in some nearby position (and where the new tag is also a lexical tag of the word).

The space of possible TRs is fixed by stating a set of "rule templates", which are essentially underspecified TRs. For example, one template provides the pattern "change tag a to b where the previous word has tag c" (where a, b, c are unspecified). Another allows the change if tag c is assigned to either of the two preceding words, etc. In the TRs that are learned, these unspecified values (a, b, c) are instantiated to specific parts of speech. Rules can also require specific words, rather than just tags, to appear in context.

TRs are learned as an ordered sequence. At each stage, the next rule adopted is the one that gives the best net improvement in tagging accuracy (since rules can both effect corrections and introduce errors). This rule is then applied to the current tag state of the training set (which is initially the initial-state assignment), and the next rule sought. This process terminates when the improvement falls to some prespecified threshold.
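The tagging side of this procedure, an initial most-probable-tag assignment corrected by an ordered sequence of context-dependent rules, can be sketched as follows. The lexicon, tags and the single rule here are invented for the example; they are not Brill's learned rules:

```python
# Minimal sketch of transformation-based tagging: assign each word its
# most-probable tag from a lexicon, then apply an ordered list of
# correction rules of the form (from_tag, to_tag, context_test).

def initial_tags(words, lexicon):
    """Initial-state annotation: each word gets its most-probable tag."""
    return [lexicon[w] for w in words]

def apply_rules(tags, rules):
    """Apply the learned rules in order; each rule sweeps the sequence."""
    for from_tag, to_tag, test in rules:
        for i, t in enumerate(tags):
            if t == from_tag and test(tags, i):
                tags[i] = to_tag
    return tags

lexicon = {"the": "DT", "can": "MD", "rusted": "VBD"}
# Invented rule: retag a modal as a noun when it follows a determiner.
rules = [("MD", "NN", lambda tags, i: i > 0 and tags[i - 1] == "DT")]
tags = apply_rules(initial_tags(["the", "can", "rusted"], lexicon), rules)
# tags is now ["DT", "NN", "VBD"]
```

Because rules see the current (possibly already corrected) tag sequence, a change made by an earlier rule can enable or block a later one, which is exactly the kind of interaction the discussion below examines.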
An interesting feature of this approach is that it allows certain interactions between rules, with possible beneficial effects. A change effected by one rule may cause the context requirement of another rule to be satisfied, allowing it to fire. For example, given rules 'change a to b if the previous tag is c' and 'change d to e if the previous tag is b', and a tag sequence c a d, the first rule may fire to give c b d, allowing the second rule to fire, giving c b e. A second form of interaction is where the tag substituted by one rule is itself subject to change by another rule, i.e. the first rule's output at a position is further modified when a later rule fires there.

Such rule interactions have been seen as an important feature, allowing what Ramshaw and Marcus (1994) call 'leveraging', as in 'leveraging of partial solutions between neighboring instances', i.e. so that corrections effected by earlier rules in a sequence allow more accurate operation of later rules (both as part of learning, and in execution).

An advantage of Brill's approach is that its learned model is quite compact, consisting of a few hundred rules that can be directly inspected (c.f. the thousands of contextual probabilities of an HMM tagger). As discussed in Ramshaw and Marcus (1994), the approach is resistant to overtraining effects. In Brill's experiments (Brill, 1995), training on 600K tokens of the PTB-tagged Wall Street Journal corpus under the 'closed vocabulary assumption' (where there are no unknown words) gave tagging accuracy a little over 97%. Without this assumption, where performance also depends on the handling of unknown words, the score was a little over 96%. These results were state-of-the-art at the time, but have since been surpassed by some statistical approaches and 'voting' systems that combine multiple taggers (Ratnaparkhi, 1996; van Halteren et al., 1998).

Problems of Brill's approach include, firstly, the speed of tagging, which was found to be significantly slower than that of HMM-based competitors. However, Roche and Schabes (1995) show how to compile a rule-based tagger to a finite-state transducer, giving very fast execution. Secondly, there is the cost in time of training, which for larger-sized training sets (e.g. 600K tokens) will take something around a day or more to complete (using Brill's own implementation). Ramshaw and Marcus (1994) propose a faster 'incremental' training algorithm (which avoids rescanning the corpus for each rule that is learned, by using lists of pointers linking rules to the sites where they apply), but note that its memory requirements are so high as to limit its applicability. Samuel (1998) proposes a 'lazy' version of transformation-based learning, a Monte Carlo variant of the standard method, and applies it to dialogue act tagging. The µ-TBL system of Lager (1999) is an efficient Prolog implementation of transformation-based learning, which trains in shorter time than Brill's own implementation (by an order of magnitude, for the tasks reported). µ-TBL also implements 'lazy' learning, and Lager's results indicate comparable tagging accuracy for the lazy and standard methods as applied to POS tagging.

Is rule interaction important?

The starting point for the work in this paper is questioning the assumption that rule interaction is important to the performance of Brill's method on POS tagging. Consider the fact that, for PTB Wall Street Journal text, baseline performance under the closed vocabulary assumption, from assigning the most-probable tag to each word, is already around 94%. A final tagging accuracy of around 97% thus indicates an improvement of around 1 in 40 tags being corrected. Given this 'sparseness' of corrections, we might suspect that the frequency of one correction being appropriately close to enable another will be quite low.

The only hard results of which we are aware that bear on this issue are given by Ramshaw and Marcus (1994), who report that rule learning on a sample of the Brown Corpus involved changes at many sites, of which only a small minority saw a change that depended on the action of more than one rule. These sites account for about 0.8% of the training set, but note that this does not translate directly into 0.8% of the resulting tagging accuracy, since applying a rule can have either positive, negative or neutral consequences.

In the remainder of the paper, we will develop some rule-based tagging methods for which rule interaction is explicitly excluded. The performance of these models provides empirical evidence concerning
the real importance of rule interaction to transformation-based POS tagging.

Independence and commitment

Our view of rule interaction is embodied by two assumptions, which we call independence and commitment, and which together serve to exclude rule interactions. The commitment assumption regulates how rules are used, requiring that where a rule fires to change a tag, that tag may not subsequently be changed again (so we are 'committed' to the change). Independence is the assumption that the frequency with which an earlier rule will modify the context relevant to the firing of a later rule is sufficiently low that it can be ignored.

Individually, these assumptions still allow a considerable degree of freedom as to how a method might be specified. During tagging, for example, commitment leaves it open as to whether a rule sequence should be used by applying each rule in turn to the entire initial tag state, or whether the rules should be applied as a group to tag each token in turn. Either option allows the possibility that earlier tag replacement steps might affect later rule firings. Independence, however, instructs us to ignore such interactions, in which case the two options should give essentially equivalent results. We will pursue the second option, with rules being applied as a group to each token, which involves testing each rule in turn until the list is exhausted or one rule fires, in which case the remaining rules are ignored.

This 'one-shot' style of using rules is familiar in machine learning as an instance of a decision list model (Rivest, 1987). More specifically, given the character of the rules involved, we have a propositional decision list system. A standard use of decision lists is for classification tasks, i.e. assigning categories to examples, based on their (static) characteristics. The 'dynamic' character of tagging makes it somewhat different from classification, a fact which might present difficulties for decision list learning were it not for the independence assumption.

In the Brill learning approach, rules are acquired in the same order as they are applied during tagging. Minimally adapting such an approach to a decision list use of TRs gives an instance of a sequential covering algorithm, i.e. the first rule learned 'covers' some part of the training data (which now 'belongs' to that rule), then the next rule learned covers some part of what remains, and so on. Following Webb and Brkic (1993), we refer to this way of ordering rules as appending, i.e. new rules are appended to the end of the current rule list.
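The one-shot decision-list use of rules described above can be sketched as follows. This is a minimal illustration with hypothetical rules and tag names: each rule is tried in order at a token, at most one fires (commitment), and every context test reads the initial tags rather than any revised ones (independence).

```python
# Minimal sketch of one-shot decision-list tagging: at each token, rules
# are tried in order until one fires, after which the remaining rules are
# ignored and the token's tag is fixed.  Context tests consult the
# *initial* tag assignment only, never revised tags.

def dl_tag(initial_tags, rules):
    out = list(initial_tags)
    for i, tag in enumerate(initial_tags):
        for frm, to, test in rules:
            if tag == frm and test(initial_tags, i):
                out[i] = to
                break          # one shot: remaining rules are ignored
    return out

# hypothetical rules: (from_tag, to_tag, context_test)
rules = [
    ('VB', 'NN', lambda t, i: i > 0 and t[i - 1] == 'DT'),
    ('NN', 'VB', lambda t, i: i > 0 and t[i - 1] == 'TO'),
]
```

Because the tests read `initial_tags`, a change made at one position can never enable or block a rule at another position, which is exactly the interaction the independence assumption discards.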
Commonly in sequential rule learning, rules learned earlier are highly general, applying to many instances, whereas rules learned later are more specific, applying to fewer instances. The general rules learned earlier have a large net positive effect, but may get specific subsets of cases wrong. The Brill approach allows later rules to be learned that correct errors made by earlier rules, via sequential rule application. For a decision list treatment of tagging, this possibility does not arise.

Webb and Brkic (1993) advocate a variant of decision list learning in which acquired rules are prepended, i.e. added to the front of the current list, so that rules are applied in the reverse of the order in which they are learned. Such an approach looks promising for a decision list treatment of tagging, as it will allow mistakes made by earlier learned rules to be 'overridden' by later, more specific rules that are prepended. In this regard, it may be useful to allow 'identity' TRs, which fire leaving the default tag unchanged, but serve to prevent the same position being modified by any subsequent rule in the list.
We will consider training methods for rule-based tagging without rule interaction in both appending and prepending variants, which we will refer to as ICA ('independence and commitment, with appending') and ICP ('... with prepending'), respectively.

The Training Algorithm

The independence and commitment assumptions have a consequence that is crucial in allowing a rapid and memory-efficient training algorithm. This is that we can divide training into separate phases, each of which separately addresses only the learning of rules that modify a given single POS. Thus, in a phase learning rules that modify tag t, independence instructs us to ignore the fact that rules learned in some earlier (or later) phase might modify the contexts around positions tagged t. Also, the possibility that rules of other phases might modify some other tag to become t can be ignored, since commitment prevents this tag from being further modified by rules learned in the current phase.

Each phase of training can restrict its attention to only a limited portion of the training corpus, e.g. learning rules to modify tag t, we need only address the 'relevant data points', which are those positions having initial tag t and at least one alternative lexical tag.
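The restriction to relevant data points can be sketched as follows; the lexicon, words and tags here are hypothetical stand-ins for the corpus resources assumed in the text.

```python
# Sketch of the phase division licensed by independence and commitment:
# rules changing tag t are learned in a phase of their own, which needs to
# look only at the "data points" for t -- positions whose initial tag is t
# and whose word admits at least one alternative tag in the lexicon.

from collections import defaultdict

def data_points_by_tag(words, initial_tags, lexicon):
    """lexicon maps each word to the set of tags it can bear (assumed given)."""
    points = defaultdict(list)
    for i, (w, t) in enumerate(zip(words, initial_tags)):
        if len(lexicon.get(w, {t})) > 1:   # at least one alternative tag
            points[t].append(i)            # store the array offset
    return points
```

A training phase for tag t then scans only `points[t]`, a small fraction of the corpus, rather than every position.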
The TRs that can apply at these positions are a fraction of those that could apply anywhere in the corpus. The approach allows for an 'incremental' training algorithm which does not suffer the space problems of Ramshaw and Marcus (1994). Initially, we must score all transformations that apply at the data points of the phase. On identifying and adopting the best rule R, we do not need to recompute all rule scores, but can instead scan the data points to find those at which R fires, and update our existing record of rule scores in regard to the possible TRs that can fire at these locations. We can then immediately determine the next rule to adopt, and so on.

An informal sketch of the training algorithm is given in Figure 1. This makes no provision for unknown words, since we make the 'closed vocabulary assumption' in our experiments.

  Load training corpus into memory (array), i.e. words + correct tag, and make the initial
  tag assignment (assign the most-probable tag to each word).
  Then, for each tag t, learn the list of rules that apply for this tag as follows:
  1. Scan the corpus, and record the 'data points' for this phase, i.e. positions where the
     default tag is t and the word has at least one alternative tag (store these array
     offsets).
  2. Compute initial TR correction scores. Firstly, scan for data points where the tag is
     incorrect, and at each score +1 for all possible TRs changing it to the correct tag
     (scores stored in a hash). Secondly, at each point where the tag is correct, score -1
     for all possible TRs changing it to any lexical alternative, but only for rules
     already present in the scores hash (i.e. rules effecting at least one correction).
  3. Loop, acquiring rules as follows: scan the scores hash for the best rule R (exit the
     loop if its score is not above threshold) and add it to the rule list. Update the
     scores hash as follows: scan for data points where rule R fires, and update the scores
     for rules that fire at these positions. (Update scoring details given in text.)

  Figure 1: Training Algorithm

Figure 1 omits the specifics of how rule scores are incrementally updated, as this differs between the appending and prepending approaches (we return to this matter shortly). An 'efficiency feature' of the algorithm is that in computing the initial scores for TRs, we firstly identify the rules that effect at least one correction somewhere, and then subsequently only score negative changes for these rules. For the appending version, these initially identified rules are the only ones that need be considered during the phase. The situation is similar for the prepending version, except that we must additionally allow for identity rules to be considered as possible rules during incremental update.

Some further efficiencies that are not described in Figure 1 involve using the prespecified stopping threshold for training to prefilter work. When the corpus is first loaded, we can extract counts for all 'error pairs' (t, t'), where the default tag t should have been t'. We need only entertain TRs changing t to t' if the count for the pair (t, t') is above threshold, since it is otherwise impossible that such TRs could score above threshold. A position with initial tag t is considered a 'data point' in training only if it has an alternative tag t' for an above-threshold error pair (t, t'). Similarly, when we identify the data points for a phase of training, we can count the occurrences of words appearing within the context window around these points. We need then only consider instantiations of lexical templates using words whose appearance count in these contexts is above threshold.

As noted above, the details of the incremental update part of the training algorithm differ for the ICA and ICP variants. For ICA, the scores hash should be viewed as recording, at each stage, the scores for each rule if it were adopted to 'cover' some portion of the remaining training instances. When some new rule R is adopted, we then identify the data points where R
fires. These positions now 'belong' to R, and are discounted for subsequent scoring (i.e. deleted from the record of data points), and we must also 'undo' or cancel any scores in the rule scores hash that depend on these positions, i.e. scoring -1 if a rule effects a correction at one of these positions, or +1 if it introduces an error there. For ICP, the scores hash records, at each stage, the scores for each rule if it were adopted to override the existing rule set. Again, when a rule R is adopted, we can update the scores hash by considering only the data points where R fires, but in this case the change made to the score of a possible rule that could fire at these positions depends not on whether R outputs the correct tag or not, but rather on whether the position would be correct under the previous rule set, and whether it will be correct given R's adoption. Specifically, if a position was correct (resp. incorrect) and R makes it incorrect (resp. correct), then for a rule S that can fire at this position we score +1 (resp. -1).

The Tagging Algorithm

For Brill's method, the distance at which two rule uses may directly affect each other is limited by the width of contexts in rule templates. By chaining, indirect interactions can in theory span over much greater distances. Hence, a tagging algorithm which applies the rules directly cannot work with a narrow window over the text, but instead must work with text units of at least sentence size.

For the present approach, the independence assumption tells us to ignore rule interactions, so we can use an algorithm which streams the text through a buffer that is just wide enough for single rules to fire. With contexts ranging up to three elements either side, a window width of seven elements suffices. At each step of advancing the window, we examine the initial tag assigned to the central element, access the subclass of rules that can modify this tag, and check these rules in sequence to see if any can apply. Once any one (or none) of them fires, we have finished at this position.

Evaluating rule-based tagging without rule interaction

We compared the performance of Brill's implementation (for which the code is publicly available) with an implementation in C of the algorithms sketched above, employing the same template set (with both lexical and non-lexical templates), and making the closed vocabulary assumption. A cross-validation approach was used. We took the even-numbered sections of the tagged Wall Street Journal component of the PTB and randomly distributed the sentences into eight bins. These collections allow for an eight-fold cross-validation of results, i.e. each training task is repeated eight times, with one of the bins selected as the test data, and the other seven combined to form the training data, giving training set sizes comparable to the 600K tokens used by Brill in his larger experiments. The results obtained are averaged to give a more reliable indication of performance than could be gained from any single run.
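The windowed streaming tagger of the preceding section can be sketched as follows. This is an illustrative sketch, not the paper's C implementation: rules are indexed by the initial tag of the central element, the buffer holds seven elements (three of context either side), and at most one rule fires per position.

```python
# Sketch of streaming decision-list tagging with a fixed seven-element
# window: contexts extend at most three tokens either side of the centre.
# Rules are indexed by the initial tag of the central element; context
# tests see initial tags only, and at most one rule fires per position.

from collections import deque

def stream_tag(initial_tags, rules_by_tag, width=3):
    size = 2 * width + 1                       # seven, for width 3
    buf = deque([None] * width, maxlen=size)   # left padding
    for t in list(initial_tags) + [None] * width:  # right padding flushes buffer
        buf.append(t)
        if len(buf) < size:
            continue
        centre = buf[width]
        if centre is None:                     # padding, not a real token
            continue
        out = centre
        for to, test in rules_by_tag.get(centre, []):
            if test(list(buf), width):         # test sees the window of initial tags
                out = to
                break                          # at most one rule fires
        yield out
```

Because each position is decided from the fixed window of initial tags, the text can be streamed through the buffer without ever holding a whole sentence, which is what the independence assumption buys at tagging time.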
We trained the Brill system at thresholds 15 (the value suggested for this much training data in the code's instructions) and 12. The results are shown in Table 1 (where the scores for accuracy, etc., are averages over the eight runs). The final column of the table shows the time taken to tag 1 million tokens using the rule sets produced (again averaged). The table also shows corresponding results for the ICA and ICP training methods, at a range of different training thresholds.

Table 1: Experimental Results (eight-fold cross-validation). For each tagging method and training threshold, the table reports tagging accuracy (%), rule count and training runtime (secs), together with execution memory and runtime (mins) on 1 million tokens, for Brill at thresholds 15 and 12 and for ICA and ICP at a range of thresholds from 15 downwards.

Let us write 'method/threshold' (e.g. Brill/15) to refer to a method trained at a given threshold. Observe that the tagging accuracy for ICA/15 is lower, by under 0.2%, than for Brill/15. However, ICA here produces fewer rules than Brill, so it is interesting instead to compare settings which produce similar rule counts; here the accuracy of ICA is lower by only 0.1%. Since ICA is the 'minimal variant' of the original Brill method, these results can be viewed as providing direct empirical evidence of the importance of rule interaction to Brill's rule-based tagging method, suggesting that it does have an impact on performance, but one that is rather limited. The short training times for ICA allow that we might seek to recoup this loss in accuracy, as compared to Brill, by training at lower thresholds. The accuracy of Brill/15 is nearly matched by ICA/9 and slightly surpassed by ICA at still lower thresholds. The results for ICP bear out the suggestion that prepending might improve performance. The results for ICP/15 and ICP/12 are only very slightly below those for Brill/15 and Brill/12. Training at lower thresholds, ICP produces results that significantly improve on those for Brill/12, but still with very short training times.

A striking feature of the results in Table 1 is the relative insensitivity of ICA/ICP tagging time to rule set size. Comparing the largest and smallest ICA rule sets, for example, we see a roughly four-fold increase in rule count, but an increase in tagging time of only around 40%. By contrast, Brill's method shows an increase in tagging time that is roughly proportional to the increase in rule count. This is not surprising, since the tagging algorithm requires a pass over the loaded input for each rule in the sequence. For decision list tagging, only a single pass over the input is made. At each token only a subset of the rules is accessed, and if any rule fires, the remainder are ignored. We might predict slightly slower tagging for ICP rule sets than for ICA, i.e. since the highly general rules that fire most often will appear later in the decision list for ICP than for ICA, and there is some evidence for this in the results (e.g. comparing ICP and ICA at the same low threshold).

Discussion

We have shown how Brill's rule-based tagging approach can be reformulated as a decision list model, allowing for much faster training, to give results that are comparable to, or better than, those of Brill's original system. The modified approach allows for tagger
execution that is quite fast, even for an implementation performing direct rule interpretation. This implementation, however, probably cannot compare with the speed of a tagger for the standard Brill approach produced using the finite-state compilation method of Roche and Schabes (1995), being run on current hardware. We expect that similar finite-state compilation is possible for the decision list approach, and speculate that, since long-distance interaction effects are excluded, it should be possible to do a compilation producing much smaller finite-state transducers.

A final point to consider is whether examples for which the Brill approach relies on interaction can be correctly handled in the modified approach. We cannot give a general answer to this question, but the following example gives some indication of what may be happening in many cases. The Brill tagger assigns the phrase to sign up the initial tags TO NN IN, which are (correctly) changed to TO VB RP under the rules 'NN becomes VB if the previous tag is TO' and 'IN becomes RP if the word is up and the previous tag is VB', where the second relies on the first having fired. Both ICA and ICP yield rules that correctly handle the case also. For ICP, for example, we have also the first of the Brill rules, but the second rule involved
¤¦¥#¤¦¸¥t®t©­¨”¬ \  ³$©­ÃX» ¹©­¥  ¤Å½aÊ¿»”ª/¬4¤¦¥   ¨”¸¬ ®%¨„·Ö³$©«½j»”ª%©7±¦¨”¥t®ø¤¦¥  ]bÀ=Ê*£¤¦Ã/£ ÃX»”ª.ª%©­ÃX®%±Åã Hhª%©­¥ Ê*¤Å®%£» °\®aª%©­±Åãb¤¦¸\·» ¸Ñ®%£\©5Hvª.¥t®=ª.°±¦©”Ç JJF J æ ¡bJ ž ! #"%$'&( )"+*+*-,/.102043,6587:9<;>=? @:ACB:D<@FE:GHD#=2=;)9ID47:JLKM9 N DPORQ4?DSQRE N E:D<;TO UVORQ4,XW!Y, Z[,\)Y]1^)"+^1_a`cb"+d4]1 #^)"e\gfihPj Wk]1bb^)fR*+d<lPb"%l_WmY"n*%l4o]S*+pY"%l_Wqr, ! #"%$'&( )"+*+*-,/.10202s,ut #l4b^Cjvh4 #wxlP\)"+h4by{zl2^C]IoH]S # )h2 Cy o #"+d4]Sb|*+]1l4 )b"+b}~l4bo|bl<\# :lP*€*%lPb}2lP}2]p )h4y $S]1^#^C"+b}‚ƒq„$Sl2^C]…^g\#of†"+b‡pl4 C\ˆhPj‰^)p]S]1$:Y€\#lP}4y }2"nb} ,‹Š!9<Œ=?KMD<K>UŽ9<OD N UVO QP?U%@TK>UŽ7T@T_’‘.2“Ž”R•F‚ sP”23—– s4˜2s_Z™]1$S]Sw[z]S I, !}4]Sb]ˆš›YlP #b"%lPœ_mš› )\)"%^‰]Sbo #"%$:œ ^)h4b_kž]S"+*(ŸRl4$Fy h2z^Ch2b_clPb o€ ¡"%$:YlP]1*™Wk]S #œ4h<¢£"n\)¤2,¥.104023,¦›§2 l<y \#"nh2b^!jvh2 ›p lP )\›hPj¨^Cp]S]I$:Y©\#l4}4}4"+b} ,ªMbˆ«¬;)917:E:E:G<UVO Q<@ 9C­[KvJEr® N EF¯<EFO KVJ’°‰D<K>UŽ9<OD N Š!9<O1­FES;)ESO7:E±9POX5™;TK>U ²!A 7SUŽD N³ OKME NVN UeQ2ESO7:ES_pl4}4]1^´PµP”P–´<µ20, ¶·]1bb]F\#Y¸š›Y  :$:Y,¹.102µ4µ,¡qº^g\#h$:Yl4^C\)"%$xplP )\#^[p #hPy }2 #l4w„l4bo»bh2bƒpY :l4^)]±pl4 #^)]S rjvh4 b #]1^C\) #"+$F\)]1o \#]F¼ \1,xªMb'«(;)9I7:E:E#GPUVORQP@½9g­xKvJEx¾E:7#9POG»Š!9PO1­FES;)EFO7:E 9PO’5(=4= N UŽE#G°[DPK-?;#D N D<O QP?D1Q2E·«(;)9I7:ET@:@TUVORQ4, ŸRlPwx]1^·š›^#^C]1b^S,[.1040 ´ ,£WalP )\Cy{hPjVyM^)p ]1]1$:Y¿\#lP}2}4"+b}±^Cy "+b}…W! 
#h4}4h2*-,¡ªMb†«(;)9I7:E#E:GPUVORQP@’9g­½KVJE¿¾ES¯—ESOKvJ ³ O A KMES;TOD<K>UŽ9<OD N‰À 9P;)Á—@)J9:=Â9<O ³ OGP?7FK>UV¯—E  9)Q4UŽ7©«(;)9<A Q4;)DPŒ‰ŒUVORQ4_pl4}4]1^£023—–.1Ã4µ, Z™h4}¡š›\C\#"nb}_Ÿ2*n"%lPbi¶·p"+]1$P_ŸRlPbÄWk]Io]S :^)]Sb_lPbo Wk]1b]S*+h4p]ÆÅ "+zb,8.I040R‘,ÇqÇp #l2$T\#"+$1lP*plP )\ÄhPj ^)p]S]1$:Y¥\:lP}2}4]S I,ȪMb«(;)9I7#E:E:G<UVO Q<@‡9C­†KVJELÉJUV;)G Š!9PO1­FES;)EFO7:Eˆ9PO…5›=2= N UŽE:G’°[DPK-?;#D N D<ORQ4?DSQREx«(;)9<A 7:EF@:@:UVO QP_plP}2]1^·.1323—–.S”2Ã, Ê»lP*n\)]1 »ZËlP]1*n]1w©lPb^1_ŸRlPœ zÍÌl—d )]1*-_Wk]F\#]S »&›]1 #$:œ_ l4bo[ÅR\#]Sd2]Sb[ÎË"+*n*+"+^1,m.I0402˜, ¡z\I‚q'w]1wxh4 #f2y{zl2^C]Io p lP )\›hPj^)p]S]1$:Y±\#lP}2}4]1 Cy{}4]1b]S :l<\#h4 I,ªMb’«(;)9I7#E:E:G<UVO Q<@ 9C­mKvJE›Ï9P?;TKVJ À 9<;)Á<@CJ9#=’9PO»Ð EF;TÑ  DP;-QRE·Š!9<;>=9<;)DP_ p lP}4]I^·.S”P–‘2´, th4 #zÒSÓ h4 #bÔlP}2]S I, .102040, t¬Y]¦ÕyMt£&(ÔÖ^)f^g\#]Sw’‚ Ôh4}2"+$¹p )h2}4 :lPwxwx"nb}€\)h h4*%^¡jvh2 Ä\# #l4b^gjvh2 )w©lP\)"+h4by z l4^)]1o¿*n]IlP #b"nb},¨ªMb¡«(;#917:E:E:G<UVORQP@‰9g­·KvJEˆÉJ UV;)G ³ O A KMES;TOD<K>UŽ9<OD N¿À 9P;)Á—@)J9:=9<OLŠa9PŒc=?K{DPK-UŽ9POD N °‰D<K>A ?;#D N D<O QP?D1Q2E  E:DP;TOUVORQÄ×:Š!9S° ÙØ Ú2ÚTÛ _ &(]S #}4]1b, ÔlPb$S](q[,PÜclPw©^)Yl—¢ƒlPbo‰ ¡"n\#$:Y]1*n*Wk,< Xl4 #$S^S,m.10404”, !¼p*nh2 )"+b}Â\)Y]†^C\#lP\)"%^g\#"+$1lP*±o]1 )"+d<l<\)"+h4bLhPj¿\) :lPb^Cy jvh2 )w©lP\)"+h4br #*+](^)]1§R]1b$F]I^jvh2 kplP )\Cy{hPjVyM^Cp]S]I$:Y·\#l4}Py }2"nb},mªMbÄ«(;#917:E:E:G<UVORQP@‰9g­[KVJE[Ý4Þ4OGx5™O O?D Nß E:ESK-A UVO Q¿9C­ÙKvJE·5‰Š  _plP}2]1^£µ2˜—– 0Rs, q™o¢(l4"e\ËÜcl<\#blPpl4 )œ Y">,£.10204˜,mqHw©l<¼"nw‰wº]SbR\) #h4p f wxho]1*›jvh2 ‰p lP )\Cy{hPjVyM^Cp]S]I$:Y¹\#l4}4}4"+b} ,iªMb†«(;)9I7#E:E:G<A UVO Q<@9C­¬KVJErŠ!9<OI­FEF;#EFO7#EË9<O©®(Œ=UV;TUŽ7:D Nß ESKvJ9IG—@cUVO °‰D<K>?;)D N  D<O QP?D1Q2EÙ«(;#917:EF@#@TUVO QP_p lP}4]I^Ë.1343<–.1”R‘, Ü£h2blP*%o»Ô!,¨Ü£"nd2]1^C\1,i.104µ ´ ,½Ô]1l4 )b"+b}¡o]I$F"%^C"+h4b»*+"+^C\#^1, ß D27#JUVOE  E#DP;TOUVO QP_ ‘“Ž3R•T‚à‘4‘P0<–‘P”2˜, mww©l4bR]S*›Üch $:Y]±lPbo…áËd4]1^Å$:Yl4z ]I^S,….10202s,ˆZ]Sy \#]S #wx"nb"%^C\)"%$Xpl4 C\)y>h4jVy{^)p 
]1]1$:Yâ\:lP}4}2"nb}ƒ¢£"e\#Y†ãb"n\)]Sy ^C\#lP\)]¿\) :lPb^#o$S]S :^S,äŠ!9PŒc=?K{DPK-UŽ9POD N¬ UVORQ4?U%@TK-UŽ7F@T_ ‘.2“>‘4•T‚à‘4‘R´1–‘4sP3, ¶·]1bÂÅlPw‰]S*>,å.I0404µ,¦Ôl4¤SfÆ\) :lPb^Cjvh4 #w©l<\)"+h4by>zl2^C]Io *+]1l4 )b"nb} ,cªMb…«(;#917:E:E:G<UVORQP@x9C­[KvJEr® N EF¯<ESOKvJ ³ OKMES;TA OD<K>UŽ9<OD N Ï9<;TUŽG2Dˆ5 ³[æ ET@SE:D<;)7:J…¾Ñ<Œc=9—@TUV?ŒuŠ!9<OA ­FES;)ESO7:ES_plP}2]1^c‘432sI–‘P320, ™lPb^¹d<l4b¥™lP*n\)]1 )]1b_½ŸRlPœ zåÌl—d )]1*-_±l4boLʅl4*e\#]S ZËlP]1*n]1wx]Sb^1,m.I0404µ,ªMwxp )h<d "+b}olP\#lPy{o #"+d4]SbÙ¢(h4 :o y $S*+l2^)^Ù\#lP}2}4"+b}¡z f…^)f^g\#]Swç$Fh2w[z"+bl<\#"nh2b,ˆªMbÆ«(;)9<A 7:E:E:G<UVO Q<@è9g­HKvJE ³ O K{ES;TODPK-UŽ9POD N Š!9<O1­FES;)ESO7:EL9<O Š!9PŒc=?K{DPK-UŽ9POD N! UVO QP?U%@TK>UŽ7T@XŠ¬é ³ °©ê¬A Ú4ë _kplP}2]1^ ”R0.F–R”20 ´ , ÎË]1hPìÂʅ]Szz€lPbo†ž[,›&( )œ "%$P,L.102043,ÂÔ]1l4 )b"nb}ƒo]Fy $S"+^)"+h4bä*+"+^C\#^iz fÂp #]Sp]Sb o"nb}‡"+bjv]S # )]IoH )*n]I^S,ÖªMb «¬;)917:E:E:G<UVO Q<@£9g­›KVJE!5 ³TíRÚ Ý À 9<;)Á<@)J9#=¡9<O ß D47:J UVOE  E:D<;TO UVORQxD<OGîÙÑ4BS;TUŽG[¾ ї@TKMEFŒ‰@T_PplP}2]1^a˜<–.1Ã_P ¡]S*ny zh4 )b]2,
2000
36
An Improved Error Model for Noisy Channel Spelling Correction

Eric Brill and Robert C. Moore
Microsoft Research
One Microsoft Way
Redmond, Wa. 98052
{brill,bobmoore}@microsoft.com

Abstract

The noisy channel model has been applied to a wide range of problems, including spelling correction. These models consist of two components: a source model and a channel model. Very little research has gone into improving the channel model for spelling correction. This paper describes a new channel model for spelling correction, based on generic string to string edits. Using this model gives significant performance improvements compared to previously proposed models.

Introduction

The noisy channel model (Shannon 1948) has been successfully applied to a wide range of problems, including spelling correction. These models consist of two components: a source model and a channel model. For many applications, people have devoted considerable energy to improving both components, with resulting improvements in overall system accuracy. However, relatively little research has gone into improving the channel model for spelling correction. This paper describes an improvement to noisy channel spelling correction via a more powerful model of spelling errors, be they typing mistakes or cognitive errors, than has previously been employed. Our model works by learning generic string to string edits, along with the probabilities of each of these edits. This more powerful model gives significant improvements in accuracy over previous approaches to noisy channel spelling correction.

1 Noisy Channel Spelling Correction

This paper will address the problem of automatically training a system to correct generic single word spelling errors.1 We do not address the problem of correcting specific word set confusions such as {to,too,two} (see (Golding and Roth 1999)). We will define the spelling correction problem abstractly as follows: Given an alphabet Σ, a dictionary D consisting of strings in Σ*, and a string s, where s ∉ D and s ∈ Σ*, find the word w ∈ D that is most likely to have been erroneously input as s. The requirement that s ∉ D can be dropped, but it only makes sense to do so in the context of a sufficiently powerful language model. In a probabilistic system, we want to find argmax_w P(w | s). Applying Bayes' Rule and dropping the constant denominator, we get the unnormalized posterior: argmax_w P(s | w) * P(w). We now have a noisy channel model for spelling correction, with two components, the source model P(w) and the channel model P(s | w). The model assumes that natural language text is generated as follows: First a person chooses a word to output, according to the probability distribution P(w). Then the person attempts to output the word w, but the noisy channel induces the person to output string s instead, according to the distribution P(s | w). For instance, under typical circumstances we would expect P(the | the) to be very high, P(teh | the) to be relatively high and P(hippopotamus | the) to be extremely low. In this paper, we will refer to the channel model as the error model.

1 Two very nice overviews of spelling correction can be found in (Kukich 1992) and (Jurafsky and Martin 2000).

Two seminal papers first posed a noisy channel model solution to the spelling correction problem. In (Mayes, Damerau et al. 1991), word bigrams are used for the source model. For the error model, they first define the confusion set of a string s to include s, along with all words w in the dictionary D such that s can be derived from w by a single application of one of the four edit operations: (1) Add a single letter. (2) Delete a single letter. (3) Replace one letter with another. (4) Transpose two adjacent letters. Let C be the number of words in the confusion set of d.
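The noisy channel ranking above (score each candidate word by P(s | w) * P(w)) can be sketched in a few lines; the toy probability tables below are invented for illustration, not values from the paper:

```python
# Toy source model P(w) and channel model P(s | w); all numbers are
# invented for illustration.
source = {"the": 0.05, "ten": 0.01}
channel = {("teh", "the"): 0.01, ("teh", "ten"): 0.001}

def correct(s, dictionary):
    # Rank candidate words by the unnormalized posterior P(s | w) * P(w).
    scored = [(channel.get((s, w), 0.0) * source[w], w) for w in dictionary]
    return max(scored)[1]

print(correct("teh", ["the", "ten"]))  # -> the
```

Here "teh" is corrected to "the" because the source model strongly prefers it, even before any contextual language model is added.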
Then they define the error model, for all s in the confusion set of d, as:

P(s | d) = α                  if s = d
         = (1 − α) / (C − 1)  otherwise

This is a very simple error model, where α is the prior on a typed word being correct, and the remaining probability mass is distributed evenly among all other words in the confusion set. Church and Gale (1991) propose a more sophisticated error model. Like Mayes, Damerau, et al. (1991), they consider as candidate source words only those words that are a single basic edit away from s, using the same edit set as above. However, two improvements are made. First, instead of weighing all edits equally, each unique edit has a probability associated with it. Second, insertion and deletion probabilities are conditioned on context. The probability of inserting or deleting a character is conditioned on the letter appearing immediately to the left of that character. The error probabilities are derived by first assuming all edits are equiprobable. They use as a training corpus a set of space-delimited strings that were found in a large collection of text, and that (a) do not appear in their dictionary and (b) are no more than one edit away from a word that does appear in the dictionary. They iteratively run the spell checker over the training corpus to find corrections, then use these corrections to update the edit probabilities. Ristad and Yianilos (1997) present another algorithm for deriving these edit probabilities from a training corpus, and show that for the problem of word pronunciation, using the learned string edit distance gives one fourth the error rate compared to using unweighted edits.

2 An Improved Error Model

Previous error models have all been based on Damerau-Levenshtein distance measures (Damerau 1964; Levenshtein 1966), where the distance between two strings is the minimum number of single character insertions, substitutions and deletions (and in some cases, character pair transpositions) necessary to derive one string from another.
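For concreteness, the confusion-set construction and the simple Mayes et al. error model can be sketched as follows; the value of α and the toy dictionary are invented:

```python
def confusion_set(s, dictionary, alphabet="abcdefghijklmnopqrstuvwxyz"):
    # s itself, plus dictionary words one basic edit away from s
    # (add, delete, replace, transpose).
    edits = set()
    for i in range(len(s) + 1):
        edits.update(s[:i] + c + s[i:] for c in alphabet)        # add
    for i in range(len(s)):
        edits.add(s[:i] + s[i + 1:])                             # delete
        edits.update(s[:i] + c + s[i + 1:] for c in alphabet)    # replace
    for i in range(len(s) - 1):
        edits.add(s[:i] + s[i + 1] + s[i] + s[i + 2:])           # transpose
    return {s} | {w for w in edits if w in dictionary}

def simple_error_model(s, d, C, alpha=0.99):
    # P(s | d): alpha if the typed string equals d, with the remaining
    # mass spread over the other C - 1 members of the confusion set.
    return alpha if s == d else (1 - alpha) / (C - 1)
```

With the toy dictionary {"the", "ten", "tea"}, the confusion set of "teh" contains all four strings, so C = 4 and each incorrect candidate receives (1 − α)/3 of the probability mass.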
Improvements have been made by associating probabilities with individual edit operations. We propose a much more generic error model. Let Σ be an alphabet. Our model allows all edit operations of the form α → β, where α, β ∈ Σ*. P(α → β) is the probability that when users intend to type the string α, they type β instead. Note that the edit operations allowed in Church and Gale (1991), Mayes, Damerau et al. (1991) and Ristad and Yianilos (1997), are properly subsumed by our generic string to string substitutions. In addition, we condition on the position in the string that the edit occurs in, P(α → β | PSN), where PSN ∈ {start of word, middle of word, end of word}.2 The position is determined by the location of substring α in the source dictionary word. Positional information is a powerful conditioning feature for rich edit operations. For instance, P(e | a) does not vary greatly between the three positions mentioned above. However, P(ent | ant) is highly dependent upon position. People rarely mistype antler as entler, but often mistype reluctant as reluctent. Within the noisy channel framework, we can informally think of our error model as follows. First, a person picks a word to generate. Then she picks a partition of the characters of that word. Then she types each partition, possibly erroneously. For example, a person might choose to generate the word physical. She would then pick a partition from the set of all possible partitions, say: ph y s i c al. Then she would generate each partition, possibly with errors. After choosing this particular word and partition, the probability of generating the string fisikle with the partition f i s i k le would be P(f | ph) * P(i | y) * P(s | s) * P(i | i) * P(k | c) * P(le | al).3 The above example points to advantages of our model compared to previous models based on weighted Damerau-Levenshtein distance. Note that neither P(f | ph) nor P(le | al) are modeled directly in the previous approaches to error modeling.
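The partition-based product for the physical → fisikle example can be written down directly; the substitution probabilities here are hypothetical numbers, not values learned by the paper's system:

```python
def partition_prob(sub_probs, pairs):
    # Probability of producing a typed string from an intended word under
    # one particular pair of partitions: the product of the segment-wise
    # substitution probabilities P(typed_segment | intended_segment).
    p = 1.0
    for typed, intended in pairs:
        p *= sub_probs.get((typed, intended), 0.0)
    return p

# Hypothetical probabilities for the physical -> fisikle example.
probs = {("f", "ph"): 0.1, ("i", "y"): 0.2, ("s", "s"): 0.9,
         ("i", "i"): 0.9, ("k", "c"): 0.3, ("le", "al"): 0.05}
pairs = [("f", "ph"), ("i", "y"), ("s", "s"), ("i", "i"),
         ("k", "c"), ("le", "al")]
```

Any segment pair absent from the table contributes zero probability, which mirrors the fact that unseen substitutions receive no count during training (before any smoothing).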
A number of studies have pointed out that a high percentage of misspelled words are wrong due to a single letter insertion, substitution, or deletion, or from a letter pair transposition (Damerau 1964; Peterson 1986). However, even if this is the case, it does not imply that nothing is to be gained by modeling more powerful edit operations. If somebody types the string confidant, we do not really want to model this error as P(a | e), but rather P(ant | ent). And anticedent can more accurately be modeled by P(anti | ante), rather than P(i | e). By taking a more generic approach to error modeling, we can more accurately model the errors people make.

2 Another good PSN feature would be morpheme boundary.
3 We will leave off the positional conditioning information for simplicity.

A formal presentation of our model follows. Let Part(w) be the set of all possible ways of partitioning string w into adjacent (possibly null) substrings. For a particular partition R ∈ Part(w), where |R| = j (R consists of j contiguous segments), let R_i be the ith segment. Under our model,

P(s | w) = Σ_{R ∈ Part(w)} P(R | w) Σ_{T ∈ Part(s), |T| = |R|} Π_{i=1..|R|} P(T_i | R_i)

One particular pair of alignments for s and w induces a set of edits that derive s from w. By only considering the best partitioning of s and w, we can simplify this to:

P(s | w) = max_{R ∈ Part(w), T ∈ Part(s)} P(R | w) Π_{i=1..|R|} P(T_i | R_i)

We do not yet have a good way to derive P(R | w), and in running experiments we determined that poorly modeling this distribution gave slightly worse performance than not modeling it at all, so in practice we drop this term.

3 Training the Model

To train the model, we need a training set consisting of {si, wi} string pairs, representing spelling errors si paired with the correct spelling of the word wi. We begin by aligning the letters in si with those in wi based on minimizing the edit distance between si and wi, based on single character insertions, deletions and substitutions.
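The max-over-partitions quantity above (with the P(R | w) term dropped, as the text does) can be computed with a dynamic program over prefixes of s and w; a minimal sketch, with an invented substitution table:

```python
def best_partition_prob(s, w, sub_probs, max_seg=3):
    # best[i][j] = best product of P(T_k | R_k) over partitions of the
    # typed prefix s[:i] and intended prefix w[:j]; segments of length
    # 0..max_seg are tried, so null segments are allowed.
    best = [[0.0] * (len(w) + 1) for _ in range(len(s) + 1)]
    best[0][0] = 1.0
    for i in range(len(s) + 1):
        for j in range(len(w) + 1):
            if best[i][j] == 0.0:
                continue
            for a in range(max_seg + 1):
                for b in range(max_seg + 1):
                    if a == b == 0 or i + a > len(s) or j + b > len(w):
                        continue
                    p = best[i][j] * sub_probs.get((s[i:i + a], w[j:j + b]), 0.0)
                    if p > best[i + a][j + b]:
                        best[i + a][j + b] = p
    return best[len(s)][len(w)]

# Invented probabilities covering the fisikle/physical example.
sub_probs = {("f", "ph"): 0.5, ("i", "y"): 0.5, ("s", "s"): 1.0,
             ("i", "i"): 1.0, ("k", "c"): 0.5, ("le", "al"): 0.5}
print(best_partition_prob("fisikle", "physical", sub_probs))  # -> 0.0625
```

This naive table-filling examines all earlier cells reachable by a segment pair, which is the source of the O(|s|^2 * |w|^2) cost discussed later for generic edits.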
For instance, given the training pair <akgsual, actual>, this could be aligned as:

a c   t u a l
a k g s u a l

This corresponds to the sequence of edit operations:

a → a, c → k, ε → g, t → s, u → u, a → a, l → l

To allow for richer contextual information, we expand each nonmatch substitution to incorporate up to N additional adjacent edits. For example, for the first nonmatch edit in the example above, with N = 2, we would generate the following substitutions:

c → k
ac → ak
c → kg
ac → akg
ct → kgs

We would do similarly for the other nonmatch edits, and give each of these substitutions a fractional count. We can then calculate the probability of each substitution α → β as count(α → β) / count(α). count(α → β) is simply the sum of the counts derived from our training data as explained above. Estimating count(α) is a bit tricky. If we took a text corpus, then extracted all the spelling errors found in the corpus and then used those errors for training, count(α) would simply be the number of times substring α occurs in the text corpus. But if we are training from a set of {si, wi} tuples and not given an associated corpus, we can do the following: (a) From a large collection of representative text, count the number of occurrences of α. (b) Adjust the count based on an estimate of the rate with which people make typing errors. Since the rate of errors varies widely and is difficult to measure, we can only crudely approximate it. Fortunately, we have found empirically that the results are not very sensitive to the value chosen. Essentially, we are doing one iteration of the Expectation-Maximization algorithm (Dempster, Laird et al. 1977). The idea is that contexts that are useful will accumulate fractional counts across multiple instances, whereas contexts that are noise will not accumulate significant counts.

4 Applying the Model

Given a string s, where s ∉ D, we want to return argmax_w P(w | s) P(w | context).
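The alignment and context-expansion steps can be sketched as follows. Two caveats: the backtrace may break ties differently than the alignment shown in the text (several alignments of <akgsual, actual> share the minimum cost), and the windowing in expand() is one reading of "up to N additional adjacent edits":

```python
def align(err, word):
    # Minimal edit-distance DP (insert/delete/substitute) plus a backtrace
    # to a list of (intended, typed) edits; "" marks an epsilon segment.
    n, m = len(word), len(err)
    d = [[i + j for j in range(m + 1)] for i in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (word[i - 1] != err[j - 1]))
    edits, i, j = [], n, m
    while i > 0 or j > 0:
        if i and j and d[i][j] == d[i - 1][j - 1] + (word[i - 1] != err[j - 1]):
            edits.append((word[i - 1], err[j - 1])); i -= 1; j -= 1
        elif j and d[i][j] == d[i][j - 1] + 1:
            edits.append(("", err[j - 1])); j -= 1
        else:
            edits.append((word[i - 1], "")); i -= 1
    return edits[::-1]

def expand(edits, k, N=2):
    # Substitutions generated from the nonmatch edit at position k by
    # joining up to N adjacent edits (left + right <= N in total).
    subs = []
    for left in range(min(k, N) + 1):
        for right in range(N - left + 1):
            window = edits[k - left:k + right + 1]
            subs.append(("".join(a for a, _ in window),
                         "".join(b for _, b in window)))
    return subs
```

Applied to the edit sequence from the text with k = 1 (the c → k edit) and N = 2, expand() reproduces exactly the five substitutions listed above.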
Our approach will be to return an n-best list of candidates according to the error model, and then rescore these candidates by taking into account the source probabilities. We are given a dictionary D and a set of parameters P, where each parameter is P(α → β) for some α, β ∈ Σ*, meaning the probability that if a string α is intended, the noisy channel will produce β instead. First note that for a particular pair of strings {s, w} we can use the standard dynamic programming algorithm for finding edit distance by filling a |s|*|w| weight matrix (Wagner and Fisher 1974; Hall and Dowling 1980), with only minor changes. For computing the Damerau-Levenshtein distance between two strings, this can be done in O(|s|*|w|) time. When we allow generic edit operations, the complexity increases to O(|s|^2 * |w|^2). In filling in a cell (i,j) in the matrix for computing Damerau-Levenshtein distance we need only examine cells (i,j-1), (i-1,j) and (i-1,j-1). With generic edits, we have to examine all cells (a,b) where a ≤ i and b ≤ j. We first precompile the dictionary into a trie, with each node in the trie corresponding to a vector of weights. If we think of the x-axis of the standard weight matrix for computing edit distance as corresponding to w (a word in the dictionary), then the vector at each node in the trie corresponds to a column in the weight matrix associated with computing the distance between s and the string prefix ending at that trie node. We store the α → β parameters as a trie of tries. We have one trie corresponding to all strings α that appear on the left hand side of some substitution in our parameter set. At every node in this trie, corresponding to a string α, we point to a trie consisting of all strings β that appear on the right hand side of a substitution in our parameter set with α on the left hand side. We store the substitution probabilities at the terminal nodes of the β tries. By storing both α and β strings in reverse order, we can efficiently compute edit distance over the entire dictionary.
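The dictionary-trie traversal can be illustrated in a simplified form. This sketch shares DP columns across common prefixes but, unlike the paper's system, applies only single-character substitution parameters (no generic edits and no α-trie/β-trie parameter store); all probabilities are invented:

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.word = None     # set at word-final nodes
        self.column = None   # one column of the weight matrix

def build_trie(words):
    root = TrieNode()
    for w in words:
        node = root
        for ch in w:
            node = node.children.setdefault(ch, TrieNode())
        node.word = w
    return root

def search(s, root, sub_probs):
    # Walk the trie depth-first; each node extends its parent's DP column
    # by one dictionary character, so shared prefixes are computed once.
    best = []
    root.column = [1.0] + [0.0] * len(s)
    stack = [root]
    while stack:
        node = stack.pop()
        if node.word is not None:
            best.append((node.column[len(s)], node.word))
        for ch, child in node.children.items():
            col = [0.0] * (len(s) + 1)
            for j in range(1, len(s) + 1):
                col[j] = node.column[j - 1] * sub_probs.get((s[j - 1], ch), 0.0)
            child.column = col
            stack.append(child)
    return max(best)
```

Because "the" and "ten" share the prefix "t", the column for that prefix is computed once and reused for both words, which is the point of compiling the dictionary into a trie.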
We process the dictionary trie from the root downwards, filling in the weight vector at each node. To find the substitution parameters that are applicable, given a particular node in the trie and a particular position in the input string s (this corresponds to filling in one cell in one vector of a dictionary trie node) we trace up from the node to the root, while tracing down the α trie from the root. As we trace down the α trie, if we encounter a terminal node, we follow the pointer to the corresponding β trie, and then trace backwards from the position in s while tracing down the β trie. Note that searching through a static dictionary D is not a requirement of our error model. It is possible that with a different search technique, we could apply our model to languages such as Turkish for which a static dictionary is inappropriate (Oflazer 1994). Given a 200,000-word dictionary, and using our best error model, we are able to spell check strings not in the dictionary in approximately 50 milliseconds on average, running on a Dell 610 500MHz Pentium III workstation.

5 Results

5.1 Error Model in Isolation

We ran experiments using a 10,000-word corpus of common English spelling errors, paired with their correct spelling. We used 80% of this corpus for training and 20% for evaluation. Our dictionary contained approximately 200,000 entries, including all words in the test set. The results in this section are obtained with a language model that assigns uniform probability to all words in the dictionary. In Table 1 we show K-best results for different maximum context window sizes, without using positional information. For instance, the 2-best accuracy is the percentage of time the correct answer is one of the top two answers returned by the system. Note that a maximum window of zero corresponds to the set of single character insertion, deletion and substitution edits, weighted with their probabilities.
We see that, up to a point, additional context provides us with more accurate spelling correction and beyond that, additional context neither helps nor hurts.

Max Window   1-Best   2-Best   3-Best
0            87.0     93.9     95.9
CG           89.5     94.9     96.5
1            90.9     95.6     96.8
2            92.9     97.1     98.1
3            93.6     97.4     98.5
4            93.6     97.4     98.5

Table 1: Results without positional information

In Table 1, the row labelled CG shows the results when we allow the equivalent set of edit operations to those used in (Church and Gale 1991). This is a proper superset of the set of edits where the maximum window is zero and a proper subset of the edits where the maximum window is one. The CG model is essentially equivalent to the Church and Gale error model, except (a) the models above can posit an arbitrary number of edits and (b) we did not do parameter reestimation (see below). Next, we measured how much we gain by conditioning on the position of the edit relative to the source word. These results are shown in Table 2. As we expected, positional information helps more when using a richer edit set than when using only single character edits. For a maximum window size of 0, using positional information gives a 13% relative improvement in 1-best accuracy, whereas for a maximum window size of 4, the gain is 22%. Our full strength model gives a 52% relative error reduction on 1-best accuracy compared to the CG model (95.0% compared to 89.5%).

Max Window   1-Best   2-Best   3-Best
0            88.7     95.1     96.6
1            92.8     96.5     97.4
2            94.6     98.0     98.7
3            95.0     98.0     98.8
4            95.0     98.0     98.8
5            95.1     98.0     98.8

Table 2: Results with positional information

We experimented with iteratively reestimating parameters, as was done in the original formulation in (Church and Gale 1991). Doing so resulted in a slight degradation in performance. The data we are using is much cleaner than that used in (Church and Gale 1991) which probably explains why reestimation benefited them in their experiments and did not give any benefit to the error models in our experiments.
5.2 Adding a Language Model

Next, we explore what happens to our results as we add a language model. In order to get errors in context, we took the Brown Corpus and found all occurrences of all words in our test set. Then we mapped these words to the incorrect spellings they were paired with in the test set, and ran our spell checker to correct the misspellings. We used two language models. The first assumed all words are equally likely, i.e. the null language model used above. The second used a trigram language model derived from a large collection of on-line text (not including the Brown Corpus). Because a spell checker is typically applied right after a word is typed, the language model only used left context. We show the results in Figure 1, where we used the error model with positional information and with a maximum context window of four, and used the language model to rescore the 5 best word candidates returned by the error model. Note that for the case of no language model, the results are lower than the results quoted above (e.g. a 1-best score of 95.0% above, compared to 93.9% in the graph). This is because the results on the Brown Corpus are computed per token, whereas above we were computing results per type. One question we wanted to ask is whether using a good language model would obviate the need for a good error model. In Figure 2, we applied the trigram model to re-sort the 5-best results of the CG model. We see that while a language model improves results, using the better error model (Figure 1) still gives significantly better results. Using a language model with our best error model gives a 73.6% error reduction compared to using a language model with the CG error model. Rescoring the 20-best output of the CG model instead of the 5-best only improves the 1-best accuracy from 90.9% to 91.0%.
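The n-best rescoring step can be sketched as follows; the candidate list and all probabilities are invented, and a real trigram model would condition on the two words of left context rather than use a single lookup table:

```python
import math

def rescore(nbest, lm_logprob):
    # Combine error-model log probabilities with language-model log
    # probabilities and return the top candidate.
    return max((err_logp + lm_logprob(w), w) for err_logp, w in nbest)[1]

# Hypothetical 2-best list for the typed string "teh", plus a toy
# left-context language model.
nbest = [(math.log(0.020), "ten"), (math.log(0.015), "the")]
lm = {"the": math.log(0.050), "ten": math.log(0.001)}.get
print(rescore(nbest, lm))  # -> the
```

Even though the error model slightly prefers "ten" here, the language model term overturns that preference, which is the effect the figures quantify.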
Figure 1 [graph: N-best accuracy for N = 1..5, no language model vs. trigram language model]: Spelling Correction Improvement When Using a Language Model

Figure 2 [graph: N-best accuracy for N = 1..5, no language model vs. trigram language model]: Using the CG Error Model with a Trigram Language Model

Conclusion

We have presented a new error model for noisy channel spelling correction based on generic string to string edits, and have demonstrated that it results in a significant improvement in performance compared to previous approaches. Without a language model, our error model gives a 52% reduction in spelling correction error rate compared to the weighted Damerau-Levenshtein distance technique of Church and Gale. With a language model, our model gives a 74% reduction in error. One exciting future line of research is to explore error models that adapt to an individual or subpopulation. With a rich set of edits, we hope highly accurate individualized spell checking can soon become a reality.

References

Church, K. and W. Gale (1991). "Probability Scoring for Spelling Correction." Statistics and Computing 1: 93-103.
Damerau, F. (1964). "A technique for computer detection and correction of spelling errors." Communications of the ACM 7(3): 659-664.
Dempster, A., N. Laird, et al. (1977). "Maximum likelihood from incomplete data via the EM algorithm." Journal of the Royal Statistical Society 39(1): 1-21.
Golding, A. and D. Roth (1999). "A Winnow-Based Approach to Spelling Correction." Machine Learning 34: 107-130.
Hall, P. and G. Dowling (1980). "Approximate string matching." ACM Computing Surveys 12(4): 17-38.
Jurafsky, D. and J. Martin (2000). Speech and Language Processing, Prentice Hall.
Kukich, K. (1992). "Techniques for Automatically Correcting Words in Text." ACM Computing Surveys 24(4): 377-439.
Levenshtein, V. (1966). "Binary codes capable of correcting deletions, insertions and reversals." Soviet Physics -- Doklady 10: 707-710.
Mayes, E., F. Damerau, et al. (1991). "Context Based Spelling Correction." Information Processing and Management 27(5): 517-522.
Oflazer, K. (1994). Spelling Correction in Agglutinative Languages. Applied Natural Language Processing, Stuttgart, Germany.
Peterson, J. (1986). "A note on undetected typing errors." Communications of the ACM 29(7): 633-637.
Ristad, E. and P. Yianilos (1997). Learning String Edit Distance. International Conference on Machine Learning, Morgan Kaufmann.
Shannon, C. (1948). "A mathematical theory of communication." Bell System Technical Journal 27(3): 379-423.
Wagner, R. and M. Fisher (1974). "The string to string correction problem." JACM 21: 168-173.
2000
37
Query-Relevant Summarization using FAQs

Adam Berger (School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, [email protected])
Vibhu O. Mittal (Just Research, 4616 Henry Street, Pittsburgh, PA 15213, [email protected])

Abstract

This paper introduces a statistical model for query-relevant summarization: succinctly characterizing the relevance of a document to a query. Learning parameter values for the proposed model requires a large collection of summarized documents, which we do not have, but as a proxy, we use a collection of FAQ (frequently-asked question) documents. Taking a learning approach enables a principled, quantitative evaluation of the proposed system, and the results of some initial experiments—on a collection of Usenet FAQs and on a FAQ-like set of customer-submitted questions to several large retail companies—suggest the plausibility of learning for summarization.

1 Introduction

An important distinction in document summarization is between generic summaries, which capture the central ideas of the document in much the same way that the abstract of this paper was designed to distill its salient points, and query-relevant summaries, which reflect the relevance of a document to a user-specified query. This paper discusses query-relevant summarization, sometimes also called "user-focused summarization" (Mani and Bloedorn, 1998). Query-relevant summaries are especially important in the "needle(s) in a haystack" document retrieval problem: a user has an information need expressed as a query (What countries export smoked salmon?), and a retrieval system must locate within a large collection of documents those documents most likely to fulfill this need. Many interactive retrieval systems—web search engines like Altavista, for instance—present the user with a small set of candidate relevant documents, each summarized; the user must then perform a kind of triage to identify likely relevant documents from this set.
The web page summaries presented by most search engines are generic, not query-relevant, and thus provide very little guidance to the user in assessing relevance. Query-relevant summarization (QRS) aims to provide a more effective characterization of a document by accounting for the user’s information need when generating a summary. Search for relevant documents         Summarize documents relative to Q σ  σ       σ       σ      (a) (b) Figure 1: One promising setting for query-relevant summarization is large-scale document retrieval. Given a user query ! , search engines typically first (a) identify a set of documents which appear potentially relevant to the query, and then (b) produce a short characterization "$#&%(')!+* of each document’s relevance to ! . The purpose of "$#&%(')!+* is to assist the user in finding documents that merit a more detailed inspection. As with almost all previous work on summarization, this paper focuses on the task of extractive summarization: selecting as summaries text spans—either complete sentences or paragraphs—from the original document. 1.1 Statistical models for summarization From a document , and query - , the task of queryrelevant summarization is to extract a portion . from , which best reveals how the document relates to the query. To begin, we start with a collection / of 0 ,213-(14.65 triplets, where . is a human-constructed summary of , relative to the query - . From such a collecSnow is not unusual in France... D1 S1 Q1 = Weather in Paris in December D2 Some parents elect to teach their children at home... S2 Q2 = Home schooling D3 Good Will Hunting is about... S3 Q3 = Academy award winners in 1998 7 8 9 : ; < = > ? @ < A B @ < 8 A ? C D E F < G H I D G B @ < 8 A < A J K > L < G B M < K N ? O B P 8 L < @ > K 8 P < > ? ... ... ... ... ... ... Figure 2: Learning to perform query-relevant summarization requires a set of documents summarized with respect to queries. 
Here we show three imaginary triplets QR%(')!S')TRU , but the statistical learning techniques described in Section 2 require thousands of examples. tion of data, we fit the best function VXWZY[-(13,(\^]_. mapping document/query pairs to summaries. The mapping we use is a probabilistic one, meaning the system assigns a value `ZYa.2bc,214-S\ to every possible summary . of Ya,213-S\ . The QRS system will summarize a Ya,213-S\ pair by selecting V(Ya,214-S\ def dfe6gihkjlenm o `pYa.pbi,213-S\ There are at least two ways to interpret `pYa.pbi,213-S\ . First, one could view `pYa.2bc,214-S\ as a “degree of belief” that the correct summary of , relative to is . . Of course, what constitutes a good summary in any setting is subjective: any two people performing the same summarization task will likely disagree on which part of the document to extract. We could, in principle, ask a large number of people to perform the same task. Doing so would impose a distribution `pYiqrbi,213-S\ over candidate summaries. Under the second, or “frequentist” interpretation, `pYa.pbi,213-S\ is the fraction of people who would select . —equivalently, the probability that a person selected at random would prefer . as the summary. The statistical model `pYiqrbi,213-S\ is parametric, the values of which are learned by inspection of the 0 ,214-(13.n5 triplets. The learning process involves maximum-likelihood estimation of probabilistic language models and the statistical technique of shrinkage (Stein, 1955). This probabilistic approach easily generalizes to the generic summarization setting, where there is no query. In that case, the training data consists of 0 ,213.n5 pairs, where . is a summary of the document , . The goal, in this case, is to learn and apply a mapping s WZ,t]u. from documents to summaries. That is, vxw y{z A single FAQ document |+} ~€ $‚ ƒ…„ Summary of document with respect to Q2 . . . What is amniocentesis? Amniocenteses, or amnio, is a prenatal test in which... 
[Figure 3, continued: Q: "What can it detect?" A: "One of the main uses of amniocentesis is to detect chromosomal abnormalities..." Q: "What are the risks of amnio?" A: "The main risk of amnio is that it may increase the chance of miscarriage..." Caption: FAQs consist of a list of questions and answers on a single topic; the FAQ depicted here is part of an informational document on amniocentesis. This paper views answers in a FAQ as different summaries of the FAQ: the answer to the i-th question is a summary of the FAQ relative to that question.]

That is, we find

    g(d) = argmax_s p(s | d)

1.2 Using FAQ data for summarization

We have proposed using statistical learning to construct a summarization system, but have not yet discussed the one crucial ingredient of any learning procedure: training data. The ideal training data would contain a large number of heterogeneous documents, a large number of queries, and summaries of each document relative to each query. We know of no such publicly-available collection. Many studies on text summarization have focused on the task of summarizing newswire text, but there is no obvious way to use news articles for query-relevant summarization within our proposed framework.

In this paper, we propose a novel data collection for training a QRS model: frequently-asked question documents. Each frequently-asked question document (FAQ) is comprised of questions and answers about a specific topic. We view each answer in a FAQ as a summary of the document relative to the question which preceded it. That is, an FAQ with N question/answer pairs comes equipped with N different queries and summaries: the answer to the i-th question is a summary of the document relative to the i-th question. While a somewhat unorthodox perspective, this insight allows us to enlist FAQs as labeled training data for the purpose of learning the parameters of a statistical QRS model.
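Since each answer is treated as a summary of the whole FAQ relative to its question, building a training set amounts to splitting each FAQ into question/answer pairs. The sketch below is illustrative only, not the authors' extraction code; it assumes a simplified format in which each question is a single line ending in "?", with the answer paragraph following it.

```python
# Hypothetical sketch: convert a FAQ document into (document, query, summary)
# training triplets, assuming questions are single lines ending in '?'.
def faq_to_triplets(faq_text):
    questions, answers = [], []
    current_answer = []
    for line in faq_text.strip().splitlines():
        line = line.strip()
        if line.endswith("?"):            # a new question begins
            if current_answer:
                answers.append(" ".join(current_answer))
                current_answer = []
            questions.append(line)
        elif line:
            current_answer.append(line)
    if current_answer:
        answers.append(" ".join(current_answer))
    # The whole FAQ is the document; the i-th answer is its summary
    # relative to the i-th question.
    return [(faq_text, q, a) for q, a in zip(questions, answers)]

faq = """What is amniocentesis?
Amniocentesis, or amnio, is a prenatal test.

What are the risks of amnio?
The main risk is that it may increase the chance of miscarriage."""

triplets = faq_to_triplets(faq)
print(len(triplets))   # 2
print(triplets[0][1])  # What is amniocentesis?
```

Real Usenet FAQs vary in layout, so a robust extractor would need format-specific rules (the standardized digest formats mentioned below make this easier).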
FAQ data has some properties that make it particularly attractive for text learning:

- There exist a large number of Usenet FAQs—several thousand documents—publicly available on the Web [1]. Moreover, many large companies maintain their own FAQs to streamline the customer-response process.

- FAQs are generally well-structured documents, so the task of extracting the constituent parts (queries and answers) is amenable to automation. There have even been proposals for standardized FAQ formats, such as RFC1153 and the Minimal Digest Format (Wancho, 1990).

- Usenet FAQs cover an astonishingly wide variety of topics, ranging from extraterrestrial visitors to mutual-fund investing. If there’s an online community of people with a common interest, there’s likely to be a Usenet FAQ on that subject.

There has been a small amount of published work involving question/answer data, including (Sato and Sato, 1998) and (Lin, 1999). Sato and Sato used FAQs as a source of summarization corpora, although in quite a different context than that presented here. Lin used the datasets from a question/answer task within the Tipster project, a dataset of considerably smaller size than the FAQs we employ. Neither of these papers focused on a statistical machine learning approach to summarization.

2 A probabilistic model of summarization

Given a query q and document d, the query-relevant summarization task is to find

    s* = argmax_s p(s | d, q),

the a posteriori most probable summary for (d, q). Using Bayes’ rule, we can rewrite this expression as

    s* = argmax_s p(q | s, d) p(s | d)
       = argmax_s p(q | s) p(s | d),    (1)
                 (relevance) (fidelity)

where the last line follows by dropping the dependence on d in p(q | s, d). Equation (1) is a search problem: find the summary s* which maximizes the product of two factors:
1. The relevance p(q | s) of the query to the summary: A document may contain some portions directly relevant to the query, and other sections bearing little or no relation to the query. Consider, for instance, the problem of summarizing a survey on the history of organized sports relative to the query "Who was Lou Gehrig?" A summary mentioning Lou Gehrig is probably more relevant to this query than one describing the rules of volleyball, even if two-thirds of the survey happens to be about volleyball.

2. The fidelity p(s | d) of the summary to the document: Among a set of candidate summaries whose relevance scores are comparable, we should prefer that summary s which is most representative of the document as a whole. Summaries of documents relative to a query can often mislead a reader into overestimating the relevance of an unrelated document. In particular, very long documents are likely (by sheer luck) to contain some portion which appears related to the query. A document having nothing to do with Lou Gehrig may include a mention of his name in passing, perhaps in the context of amyotrophic lateral sclerosis, the disease from which he suffered. The fidelity term guards against this occurrence by rewarding or penalizing candidate summaries, depending on whether they are germane to the main theme of the document.

More generally, the fidelity term represents a prior, query-independent distribution over candidate summaries. In addition to enforcing fidelity, this term could serve to distinguish between more and less fluent candidate summaries, in much the same way that traditional language models steer a speech dictation system towards more fluent hypothesized transcriptions.

[Footnote 1: Two online sources for FAQ data are www.faqs.org and rtfm.mit.edu.]
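The factorization in (1) can be illustrated in a few lines of code. The following is a hedged sketch rather than the paper's implementation: it scores each candidate extract by log p(q | s) + log p(s | d) using simple add-one-smoothed unigram models, not the shrinkage-based mixture developed later.

```python
import math
from collections import Counter

# Illustrative scoring of candidate summaries by relevance + fidelity,
# as in equation (1). Add-one smoothing stands in for the paper's models.
def unigram(text, vocab):
    counts = Counter(text.split())
    total = len(text.split())
    return {w: (counts[w] + 1) / (total + len(vocab)) for w in vocab}

def score(summary, query, document, vocab):
    p_s = unigram(summary, vocab)    # model estimated from the summary
    p_d = unigram(document, vocab)   # model estimated from the document
    relevance = sum(math.log(p_s[w]) for w in query.split())
    fidelity = sum(math.log(p_d[w]) for w in summary.split())
    return relevance + fidelity

document = "lou gehrig played baseball volleyball rules are long"
candidates = ["lou gehrig played baseball", "volleyball rules are long"]
query = "who was lou gehrig"
vocab = set((document + " " + query).split())
best = max(candidates, key=lambda s: score(s, query, document, vocab))
print(best)  # lou gehrig played baseball
```

As in the Lou Gehrig example, the extract sharing vocabulary with the query wins on the relevance term, while the fidelity term breaks ties among equally relevant candidates.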
In words, (1) says that the best summary of a document relative to a query q is relevant to the query (exhibits a large p(q | s) value) and also representative of the document from which it was extracted (exhibits a large p(s | d) value). We now describe the parametric form of these models, and how one can determine optimal values for these parameters using maximum-likelihood estimation.

2.1 Language modeling

The type of statistical model we employ for both p(q | s) and p(s | d) is a unigram probability distribution over words; in other words, a language model. Stochastic models of language have been used extensively in speech recognition, optical character recognition, and machine translation (Jelinek, 1997; Berger et al., 1994). Language models have also started to find their way into document retrieval (Ponte and Croft, 1998; Ponte, 1998).

The fidelity model p(s | d)

One simple statistical characterization of an n-word document d = {w_1, w_2, ..., w_n} is the frequency of each word in d—in other words, a marginal distribution over words. That is, if word w appears k times in d, then p_d(w) = k/n. This is not only intuitive, but also the maximum-likelihood estimate for p_d(w).

Now imagine that, when asked to summarize d relative to q, a person generates a summary from d in the following way:

- Select a length ℓ for the summary according to some distribution ω_d.
- Do for i = 1, 2, ..., ℓ:
  - Select a word w at random according to the distribution p_d. (That is, throw all the words in d into a bag, pull one out, and then replace it.)
  - Set s_i ← w.

In following this procedure, the person will generate the summary
s = {s_1, s_2, ..., s_ℓ} with probability

    p(s | d) = ω_d(ℓ) ∏_{i=1..ℓ} p_d(s_i)    (2)

Denoting by W the set of all known words, and by c(w ∈ s) the number of times that word w appears in s, one can also write (2) as a multinomial distribution:

    p(s | d) = ω_d(ℓ) ∏_{w ∈ W} p_d(w)^{c(w ∈ s)}    (3)

In the text classification literature, this characterization of d is known as a "bag of words" model, since the distribution p_d does not take account of the order of the words within the document d, but rather views d as an unordered set ("bag") of words. Of course, ignoring word order amounts to discarding potentially valuable information. In Figure 3, for instance, the second question contains an anaphoric reference to the preceding question: a sophisticated context-sensitive model of language might be able to detect that "it" in this context refers to amniocentesis, but a context-free model will not.

The relevance model p(q | s)

In principle, one could proceed analogously to (2), and take

    p(q | s) = ω_s(k) ∏_{i=1..k} p_s(q_i)    (4)

for a length-k query q = {q_1, q_2, ..., q_k}. But this strategy suffers from a sparse estimation problem. In contrast to a document, which we expect will typically contain a few hundred words, a normal-sized summary contains just a handful of words. What this means is that p_s will assign zero probability to most words, and

[Figure 4: The relevance p(q | s_ij) of a query q to the j-th answer in document d_i is a convex combination of five distributions: (1) a uniform model p_U; (2) a corpus-wide model p_C; (3) a model p_{d_i} constructed from the document containing s_ij; (4) a model p_{N_ij} constructed from s_ij and the neighboring sentences in d_i; (5) a model p_{s_ij} constructed from s_ij alone. (The p_N distribution is omitted for clarity.)]
any query containing a word not in the summary will receive a relevance score of zero. (The fidelity model doesn’t suffer from zero probabilities, at least not in the extractive summarization setting. Since a summary s is part of its containing document d, every word in s also appears in d, and therefore p_d(w) > 0 for every word w ∈ s. But we have no guarantee, for the relevance model, that a summary contains all the words in the query.)

We address this zero-probability problem by interpolating or "smoothing" the p_s model with four more robustly estimated unigram word models. Listed in order of decreasing variance but increasing bias away from p_s, they are:

p_N: a probability distribution constructed using not only s, but also all words within the six summaries (answers) surrounding s in d. Since p_N is calculated using more text than just s alone, its parameter estimates should be more robust than those of p_s. On the other hand, the p_N model is, by construction, biased away from p_s, and therefore provides only indirect evidence for the relation between q and s.

p_d: a probability distribution constructed over the entire document d containing s. This model has even less variance than p_N, but is even more biased away from p_s.

p_C: a probability distribution constructed over all documents d.

p_U: the uniform distribution over all words.

Figure 4 is a hierarchical depiction of the various language models which come into play in calculating p(q | s). Each summary model p_s lives at a leaf node, and the relevance p(q | s) of a query to that summary is a convex combination of the distributions at each node

Algorithm: Shrinkage for λ estimation
Input: Distributions p_s, p_N, p_d, p_C, p_U; held-out data H = {d, q, s} (not used to estimate p_s, p_N, p_d, p_C, p_U)
Output: Model weights λ = {λ_s, λ_N, λ_d, λ_C, λ_U}
1. Set λ_s ← λ_N ← λ_d ← λ_C ← λ_U ← 1/5
2. Repeat until λ converges:
3.   Set count_α ← 0 for α ∈ {s, N, d, C, U}
4.   (E-step) count_s ← count_s + λ_s p_s(w) / Σ_α λ_α p_α(w), accumulated over the words w of the held-out data (similarly for N, d, C, U)
5.   (M-step) λ_s ← count_s / Σ_α count_α (similarly for λ_N, λ_d, λ_C, λ_U)

along a path from the leaf to the root [2]:

    p(q | s) = λ_s p_s(q) + λ_N p_N(q) + λ_d p_d(q) + λ_C p_C(q) + λ_U p_U(q)    (5)

We calculate the weighting coefficients λ = {λ_s, λ_N, λ_d, λ_C, λ_U} using the statistical technique known as shrinkage (Stein, 1955), a simple form of the EM algorithm (Dempster et al., 1977).

As a practical matter, if one assumes the ω_s model assigns probabilities independently of s, then we can drop the ω_s term when ranking candidate summaries, since the score of all candidate summaries will receive an identical contribution from the ω_s term. We make this simplifying assumption in the experiments reported in the following section.

3 Results

To gauge how well our proposed summarization technique performs, we applied it to two different real-world collections of answered questions:

Usenet FAQs: A collection of 201 frequently-asked question documents from the comp.* Usenet hierarchy. The documents contained 1,800 question/answer pairs in total.

Call-center data: A collection of questions submitted by customers to the companies Air Canada, Ben and Jerry, Iomagic, and Mylex, along with the answers supplied by company representatives. These four documents contain 10,395 question/answer pairs.

[Footnote 2: By incorporating a p_d model into the relevance model, equation (5) has implicitly resurrected the dependence on d which we dropped, for the sake of simplicity, in deriving (1).]

We conducted an identical, parallel set of experiments on both. First, we used a randomly-selected subset of 70% of the question/answer pairs to calculate the language models p_s, p_N, p_d, p_C—a simple matter of counting word frequencies. Then, we used this same set of data to estimate the model weights λ = {λ_s, λ_N, λ_d, λ_C, λ_U} using shrinkage.
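The shrinkage procedure above amounts to a small EM loop. The sketch below illustrates the E-step/M-step structure with toy stand-in component models; it is not the paper's code, and the two-component mixture is an assumption made purely for illustration.

```python
# Hedged sketch of shrinkage: EM over mixture weights lambda, given fixed
# component distributions and a set of held-out words.
def em_shrinkage(models, heldout_words, iterations=10):
    names = list(models)
    lam = {m: 1.0 / len(names) for m in names}     # uniform initialization
    for _ in range(iterations):
        count = {m: 0.0 for m in names}
        for w in heldout_words:                    # E-step: responsibilities
            mix = sum(lam[m] * models[m](w) for m in names)
            for m in names:
                count[m] += lam[m] * models[m](w) / mix
        total = sum(count.values())                # M-step: re-normalize
        lam = {m: count[m] / total for m in names}
    return lam

# Toy components: a sharp "summary" model and a flat "uniform" model.
models = {
    "summary": lambda w: 0.5 if w in {"amnio", "risk"} else 1e-6,
    "uniform": lambda w: 1.0 / 1000,
}
weights = em_shrinkage(models, ["amnio", "risk", "zebra"])
print(round(sum(weights.values()), 6))  # the weights always sum to 1
```

Words the summary model explains well pull weight toward it; out-of-summary words (here "zebra") pull weight toward the smoother components, mirroring the behavior in Figure 6.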
We reserved the remaining 30% of the question/answer pairs to evaluate the performance of the system, in a manner described below.

Figure 5 shows the progress of the EM algorithm in calculating maximum-likelihood values for the smoothing coefficients λ, for the first of the three runs on the Usenet data. The quick convergence and the final λ values were essentially identical for the other partitions of this dataset. The call-center data’s convergence behavior was similar, although the final λ values were quite different.

Figure 6 shows the final model weights for the first of the three experiments on both datasets. For the Usenet FAQ data, the corpus language model is the best predictor of the query and thus receives the highest weight. This may seem counterintuitive; one might suspect that the answer to the query (s, that is) would be most similar to, and therefore the best predictor of, the query. But the corpus model, while certainly biased away from the distribution of words found in the query, contains (by construction) no zeros, whereas each summary model is typically very sparse. In the call-center data, the corpus model weight is lower at the expense of a higher document model weight. We suspect this arises from the fact that the documents in the Usenet data were all quite similar to one another in lexical content, in contrast to the call-center documents. As a result, in the call-center data the document containing s will appear much more relevant than the corpus as a whole.

To evaluate the performance of the trained QRS model, we used the previously-unseen portion of the FAQ data in the following way. For each test (d, q) pair, we recorded how highly the system ranked the correct summary s*—the answer to q in d—relative to the other answers in d. We repeated this entire sequence three times for both the Usenet and the call-center data.

For these datasets, we discovered that using a uniform fidelity term in place of the
p(s | d) model described above yields essentially the same result. This is not surprising: while the fidelity term is an important component of a real summarization system, our evaluation was conducted in an answer-locating framework, and in this context the fidelity term—enforcing that the summary be similar to the entire document from which it was drawn—is not so important.

[Figure 5: Estimating the weights of the five constituent models in (5) using the EM algorithm. The values here were computed using a single, randomly-selected 70% portion of the Usenet FAQ dataset. Left: The weights λ for the models are initialized to 1/5, but within a few iterations settle to their final values. Right: The progression of the likelihood of the training data during the execution of the EM algorithm; almost all of the improvement comes in the first five iterations.]

[Figure 6: Maximum-likelihood weights for the various components of the relevance model p(q | s), calculated using shrinkage. Left (Usenet FAQ data): Summary 29%, Neighbors 10%, Document 14%, Corpus 47%, Uniform 0%. Right (call-center data): Summary 11%, Neighbors 0%, Document 40%, Corpus 42%, Uniform 7%.]

From a set of rankings {r_1, r_2, ..., r_N}, one can measure the quality of a ranking algorithm using the harmonic mean rank:

    H = N / Σ_{i=1..N} (1/r_i)

A lower number indicates better performance; H = 1, which is optimal, means that the algorithm consistently assigns the first rank to the correct answer. Table 1 shows the harmonic mean rank on the two collections.
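The metric itself is a one-liner; a minimal sketch:

```python
# Harmonic mean rank of the correct answers over a set of test queries.
# Lower is better; 1.0 means the true answer always ranks first.
def harmonic_mean_rank(ranks):
    return len(ranks) / sum(1.0 / r for r in ranks)

print(harmonic_mean_rank([1, 1, 1]))              # 1.0 (perfect)
print(round(harmonic_mean_rank([1, 2, 4]), 2))    # 1.71
```

Because the mean is harmonic rather than arithmetic, occasional first-rank hits dominate the score, which is appropriate when users only inspect the top of a ranked list.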
The third column of Table 1 shows the result of a QRS system using a uniform fidelity model, the fourth corresponds to a standard tfidf-based ranking method (Ponte, 1998), and the last column reflects the performance of randomly guessing the correct summary from all answers in the document.

Table 1: Performance of query-relevant extractive summarization on the Usenet and call-center datasets. The numbers reported in the three rightmost columns are harmonic mean ranks: lower is better.

              trial   # trials   LM     tfidf   random
    Usenet      1       554      1.41    2.29     4.20
    FAQ         2       549      1.38    2.42     4.25
    data        3       535      1.40    2.30     4.19
    Call        1      1020      4.8    38.7    1335
    center      2      1055      4.0    22.6    1335
    data        3      1037      4.2    26.0    1321

4 Extensions

4.1 Question-answering

The reader may by now have realized that our approach to the QRS problem may be portable to the problem of question-answering. By question-answering, we mean a system which automatically extracts from a potentially lengthy document (or set of documents) the answer to a user-specified question. Devising a high-quality question-answering system would be of great service to anyone lacking the inclination to read an entire user’s manual just to find the answer to a single question. The success of the various automated question-answering services on the Internet (such as AskJeeves) underscores the commercial importance of this task.

One can cast answer-finding as a traditional document retrieval problem by considering each candidate answer as an isolated document and ranking each candidate answer by relevance to the query. Traditional tfidf-based ranking of answers will reward candidate answers with many words in common with the query. Employing traditional vector-space retrieval to find answers seems attractive, since tfidf is a standard, time-tested algorithm in the toolbox of any IR professional. What this paper has described is a first step towards more sophisticated models of question-answering.
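For concreteness, the traditional vector-space route just described can be sketched as follows. This is a generic cosine tf-idf ranker, an illustration only, and not the specific Ponte-style system used as the tfidf baseline in Table 1.

```python
import math
from collections import Counter

# Illustrative tf-idf baseline: treat each candidate answer as an isolated
# document and rank by cosine similarity to the query.
def tfidf_rank(query, answers):
    n = len(answers)
    df = Counter(w for a in answers for w in set(a.split()))
    idf = {w: math.log((n + 1) / (df[w] + 1)) + 1 for w in df}

    def vec(text):
        tf = Counter(text.split())
        return {w: tf[w] * idf.get(w, 1.0) for w in tf}

    def cosine(u, v):
        dot = sum(u[w] * v.get(w, 0.0) for w in u)
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    qv = vec(query)
    return sorted(answers, key=lambda a: cosine(qv, vec(a)), reverse=True)

answers = ["reboot the router to fix the error",
           "our refund policy lasts thirty days"]
ranked = tfidf_rank("how do I get a refund", answers)
print(ranked[0])  # our refund policy lasts thirty days
```

As the paper notes, such a ranker rewards only literal word overlap with the query, which is exactly the weakness the language-model approach aims to address.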
First, we have dispensed with the simplifying assumption that the candidate answers are independent of one another by using a model which explicitly accounts for the correlation between text blocks—candidate answers—within a single document. Second, we have put forward a principled statistical model for answer-ranking; argmax_s p(s | d, q) has a probabilistic interpretation: the best answer to q within d is s.

Question-answering and query-relevant summarization are of course not one and the same. For one, the criterion of containing an answer to a question is rather stricter than mere relevance. Put another way, only a small number of documents actually contain the answer to a given query, while every document can in principle be summarized with respect to that query. Second, it would seem that the p(s | d) term, which acts as a prior on summaries in (1), is less appropriate in a question-answering setting, where it is less important that a candidate answer to a query bears resemblance to the document containing it.

4.2 Generic summarization

Although this paper focuses on the task of query-relevant summarization, the core ideas—formulating a probabilistic model of the problem and learning the values of this model automatically from FAQ-like data—are equally applicable to generic summarization. In this case, one seeks the summary which best typifies the document. Applying Bayes’ rule as in (1),

    s* = argmax_s p(s | d) = argmax_s p(d | s) p(s),    (6)
                             (generative)  (prior)

The first term on the right is a generative model of documents from summaries, and the second is a prior distribution over summaries. One can think of this factorization in terms of a dialogue. Alice, a newspaper editor, has an idea s for a story, which she relates to Bob. Bob researches and writes the story d, which we can view as a "corruption" of Alice’s original idea s. The task of generic summarization is to recover s
, given only the generated document d, a model p(d | s) of how Bob generates documents from summaries, and a prior distribution p(s) on ideas s.

The central problem in information theory is reliable communication through an unreliable channel. We can interpret Alice’s idea s as the original signal, and the process by which Bob turns this idea into a document d as the channel, which corrupts the original message. The summarizer’s task is to "decode" the original, condensed message from the document. We point out this source-channel perspective because of the increasing influence that information theory has exerted on language and information-related applications. For instance, the source-channel model has been used for non-extractive summarization, generating titles automatically from news articles (Witbrock and Mittal, 1999).

The factorization in (6) is superficially similar to (1), but there is an important difference: p(d | s) is generative, from a summary to a larger document, whereas p(q | s) is compressive, from a summary to a smaller query. This distinction is likely to translate in practice into quite different statistical models and training procedures in the two cases.

5 Summary

The task of summarization is difficult to define and even more difficult to automate. Historically, a rewarding line of attack for automating language-related problems has been to take a machine learning perspective: let a computer learn how to perform the task by "watching" a human perform it many times. This is the strategy we have pursued here. There has been some work on learning a probabilistic model of summarization from text; some of the earliest work on this was due to Kupiec et al. (1995), who used a collection of manually-summarized text to learn the weights for a set of features used in a generic summarization system.
Hovy and Lin (1997) present another system that learned how the position of a sentence affects its suitability for inclusion in a summary of the document. More recently, there has been work on building more complex, structured models—probabilistic syntax trees—to compress single sentences (Knight and Marcu, 2000). Mani and Bloedorn (1998) have recently proposed a method for automatically constructing decision trees to predict whether a sentence should or should not be included in a document’s summary. These previous approaches focus mainly on the generic summarization task, not query-relevant summarization.

The language modelling approach described here does suffer from a common flaw within text processing systems: the problem of synonymy. A candidate answer containing the term Constantinople is likely to be relevant to a question about Istanbul, but recognizing this correspondence requires a step beyond word frequency histograms. Synonymy has received much attention within the document retrieval community recently, and researchers have applied a variety of heuristic and statistical techniques—including pseudo-relevance feedback and local context analysis (Efthimiadis and Biron, 1994; Xu and Croft, 1996). Some recent work in statistical IR has extended the basic language modelling approaches to account for word synonymy (Berger and Lafferty, 1999).

This paper has proposed the use of two novel datasets for summarization: the frequently-asked questions (FAQs) from Usenet archives and question/answer pairs from the call centers of retail companies. Clearly this data isn’t a perfect fit for the task of building a QRS system: after all, answers are not summaries. However, we believe that the FAQs represent a reasonable source of query-related document condensations. Furthermore, using FAQs allows us to assess the effectiveness of applying standard statistical learning machinery—maximum-likelihood estimation, the EM algorithm, and so on—to the QRS problem.
More importantly, it allows us to evaluate our results in a rigorous, non-heuristic way. Although this work is meant as an opening salvo in the battle to conquer summarization with quantitative, statistical weapons, we expect in the future to enlist linguistic, semantic, and other non-statistical tools which have shown promise in condensing text.

Acknowledgments

This research was supported in part by an IBM University Partnership Award and by Claritech Corporation. The authors thank Right Now Tech for the use of the call-center question database. We also acknowledge thoughtful comments on this paper by Inderjeet Mani.

References

A. Berger and J. Lafferty. 1999. Information retrieval as statistical translation. In Proc. of ACM SIGIR-99.

A. Berger, P. Brown, S. Della Pietra, V. Della Pietra, J. Gillett, J. Lafferty, H. Printz, and L. Ures. 1994. The CANDIDE system for machine translation. In Proc. of the ARPA Human Language Technology Workshop.

Y. Chali, S. Matwin, and S. Szpakowicz. 1999. Query-biased text summarization as a question-answering technique. In Proc. of the AAAI Fall Symp. on Question Answering Systems, pages 52–56.

A. Dempster, N. Laird, and D. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39B:1–38.

E. Efthimiadis and P. Biron. 1994. UCLA-Okapi at TREC-2: Query expansion experiments. In Proc. of the Text Retrieval Conference (TREC-2).

E. Hovy and C. Lin. 1997. Automated text summarization in SUMMARIST. In Proc. of the ACL Wkshp on Intelligent Text Summarization, pages 18–24.

F. Jelinek. 1997. Statistical methods for speech recognition. MIT Press.

K. Knight and D. Marcu. 2000. Statistics-based summarization—Step one: Sentence compression. In Proc. of AAAI-00.

J. Kupiec, J. Pedersen, and F. Chen. 1995. A trainable document summarizer. In Proc. of SIGIR-95, pages 68–73, July.

Chin-Yew Lin. 1999. Training a selection function for extraction. In Proc.
of the Eighth ACM CIKM Conference, Kansas City, MO.

I. Mani and E. Bloedorn. 1998. Machine learning of generic and user-focused summarization. In Proc. of AAAI-98, pages 821–826.

J. Ponte and W. Croft. 1998. A language modeling approach to information retrieval. In Proc. of SIGIR-98, pages 275–281.

J. Ponte. 1998. A language modelling approach to information retrieval. Ph.D. thesis, University of Massachusetts at Amherst.

S. Sato and M. Sato. 1998. Rewriting saves extracted summaries. In Proc. of the AAAI Intelligent Text Summarization Workshop, pages 76–83.

C. Stein. 1955. Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. In Proc. of the Third Berkeley Symposium on Mathematical Statistics and Probability, pages 197–206.

F. Wancho. 1990. RFC 1153: Digest message format.

M. Witbrock and V. Mittal. 1999. Headline generation: A framework for generating highly-condensed non-extractive summaries. In Proc. of ACM SIGIR-99, pages 315–316.

J. Xu and B. Croft. 1996. Query expansion using local and global document analysis. In Proc. of ACM SIGIR-96.
An Algorithm for One-page Summarization of a Long Text Based on Thematic Hierarchy Detection

Yoshio Nakao
Fujitsu Laboratories Ltd.
Kamikodanaka 4-1-1, Nakahara-ku, Kawasaki, Japan, 211-8588
nakao@flab.fujitsu.co.jp

Abstract

This paper presents an algorithm for text summarization using the thematic hierarchy of a text. The algorithm is intended to generate a one-page summary for the user, thereby enabling the user to skim large volumes of an electronic book on a computer display. The algorithm first detects the thematic hierarchy of a source text with lexical cohesion measured by term repetitions. Then, it identifies boundary sentences at which a topic of appropriate grading probably starts. Finally, it generates a structured summary indicating the outline of the thematic hierarchy. This paper mainly describes and evaluates the part for boundary sentence identification in the algorithm, and then briefly discusses the readability of one-page summaries.

1 Introduction

This paper presents an algorithm for text summarization using the thematic hierarchy of a long text, especially for use by readers who want to skim an electronic book of several dozens of pages on a computer display. For those who want an outline to quickly understand important parts of a long text, a one-page summary is more useful than a quarter-size summary, such as that generated by a typical automatic text summarizer. Moreover, a one-page summary helps users reading a long text online because the whole summary can appear at one time on the screen of a computer display.

To make such a highly compressed summary, topics of appropriate grading must be extracted according to the size of the summary to be output, and selected topics must be condensed as much as possible.
The proposed algorithm decomposes a text into an appropriate number of textual units by their subtopics, and then generates short extracts for each unit. For example, if a thirty-sentence summary is required to contain as many topics as possible, the proposed algorithm decomposes a source text into approximately ten textual units, and then generates a summary composed of two- or three-sentence extracts of these units.

The proposed algorithm consists of three stages. In the first stage, it detects the thematic hierarchy of a source text to decompose the source text into an appropriate number of textual units of approximately the same size. In the second stage, it adjusts each boundary between these textual units to identify a boundary sentence, indicating where a topic corresponding to a textual unit probably starts. It then selects a lead sentence that probably indicates the contents of subsequent parts in the same textual unit. In the last stage, it generates a structured summary of these sentences, thereby providing an outline of the thematic hierarchy of the source text.

The remainder of this paper includes the following: an explanation of problems in one-page summarization that the proposed algorithm is intended to solve; brief explanations of a previously published algorithm for thematic hierarchy detection (Nakao, 1999) and a problem that must be solved to successfully realize one-page summarization; a description and evaluation of the algorithm for boundary sentence identification; a brief explanation of an algorithm for structured summary construction; and some points of discussion on one-page summarization for further research.

2 Problems in one-page summarization of a long text

This section examines problems in one-page summarization. The proposed algorithm is intended to solve three such problems.

The first problem is related to text decomposition.
Newspaper editorials or technical papers can be decomposed based on their rhetorical structures. However, a long aggregated text, such as a long technical survey report, cannot be decomposed in the same way, because large textual units, such as those longer than one section, are usually constructed with only weak and vague relationships. Likewise, their arrangement may seem almost at random if analyzed according to their logical or rhetorical relationships. Thus, a method for detecting such large textual units is required.

Since a large textual unit often corresponds to a logical document element, such as a part or section, rendering features of logical elements can have an important role in detecting such a unit. For example, a section header is distinguishable because it often consists of a decimal number followed by capitalized words. However, a method for detecting a large textual unit by rendering features is not expected to have a wide range of applicability. In other words, since the process for rendering features of logical elements varies according to document type, heuristic rules for detection must be prepared for every document type. That is a problem. Moreover, the logical structure of a text does not always correspond to its thematic hierarchy, especially if a section consists of an overview clause followed by other clauses that can be divided into several groups by their subtopics.

Since then, based on Hearst's work (1994), an algorithm for detecting the thematic hierarchy of a text using only lexical cohesion (Halliday and Hasan, 1976) measured by term repetitions was developed (Nakao, 1999).
In comparison with some alternatives (Salton et al., 1996; Yaari, 1998), one of the features of the algorithm is that it can decompose a text into thematic textual units of approximately the same size, ranging from units just smaller than the entire text down to units of about one paragraph. In this paper, a summarization algorithm based on this feature is proposed.

The second problem is related to the textual coherence of a one-page summary itself. A three-sentence extract of a large text, which the proposed algorithm is designed to generate for a topic of appropriate grading, tends to form a collection of unrelated sentences if it is generated by simple extraction of important sentences. Furthermore, the summary should provide new information to a reader, so an introduction is necessary to help a reader understand it. Figure 4 shows a summary example of a technical survey report consisting of one hundred thousand characters. It was generated by extracting sentences with multiple significant terms as determined by the likelihood ratio test of goodness-of-fit for the term frequency distribution. It seems to have sentences with some important concepts (keywords), but they do not relate much to one another. Moreover, inferring the contexts in which they appear is difficult. To prevent this problem, the proposed algorithm is designed to extract sentences from only the lead part of every topic.

The third problem is related to the readability of a summary. A one-page summary is much shorter than a very long text, such as a one-hundred-page book, but is too long to read easily without some breaks indicating segues of topics. Even for an entire expository text, for which a method for displaying the thematic hierarchy with generated headers was proposed to assist a reader in exploring the content (Yaari, 1998), a good summary is required to help a user understand quickly.
To improve readability, the proposed algorithm divides every one-page summary into several parts, each of which consists of a heading-like sentence followed by some paragraphs.

3 Text Summarization Algorithm

3.1 Thematic Hierarchy Detection

In the first stage, the proposed algorithm uses the previously published algorithm (Nakao, 1999) to detect the thematic hierarchy of a text based on lexical cohesion measured by term repetitions. The output of this stage is a set of lists consisting of thematic boundary candidate sections (TBCSs). The lists correspond individually to every layer of the hierarchy and are composed of the TBCSs that separate the source text into thematic textual units of approximately the same size.

3.1.1 Thematic Hierarchy Detection Algorithm

First, the algorithm calculates a cohesion score at fixed-width intervals in the source text. Following Hearst's work (1994), a cohesion score is calculated from the lexical similarity of two adjacent fixed-width windows (which are eight times larger than the interval width) set at a specific point, by the following formula:

  c(b_l, b_r) = Σ_t w_{t,b_l} · w_{t,b_r} / sqrt(Σ_t w²_{t,b_l} · Σ_t w²_{t,b_r})

where b_l and b_r are the textual blocks in the left and right windows, respectively, w_{t,b_l} is the frequency of term¹ t in b_l, and w_{t,b_r} is the frequency of t in b_r. Hereafter, the point between the left and right windows is referred to as the reference point of a cohesion score.

The algorithm then detects thematic boundaries at the minimal points of the four-item moving average (the arithmetic mean of four consecutive scores) of the cohesion score series. After that, it selects the textual area contributing the most to every minimal value and identifies it as a TBCS.
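A minimal sketch of this stage in Python (our own rendering, not the authors' code): the cohesion score is the cosine similarity of the term-frequency vectors of the two adjacent windows, smoothed with a four-item moving average whose local minima are the boundary candidates.

```python
from collections import Counter
from math import sqrt

def cohesion(left_block, right_block):
    """Cohesion score c(b_l, b_r): cosine similarity of the term-frequency
    vectors of two adjacent fixed-width windows (after Hearst, 1994)."""
    wl, wr = Counter(left_block), Counter(right_block)
    num = sum(wl[t] * wr[t] for t in wl)
    den = sqrt(sum(f * f for f in wl.values()) * sum(f * f for f in wr.values()))
    return num / den if den else 0.0

def moving_average(scores, n=4):
    """Arithmetic mean of n consecutive cohesion scores."""
    return [sum(scores[i:i + n]) / n for i in range(len(scores) - n + 1)]

def minimal_points(series):
    """Indices of local minima of the averaged series; these are the
    candidate thematic boundaries."""
    return [i for i in range(1, len(series) - 1)
            if series[i - 1] > series[i] < series[i + 1]]
```

The blocks are assumed to be lists of content-word tokens; selecting the textual area that contributes most to each minimum (the TBCS proper) is omitted here.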
Figure 1 shows the results of a TBCS detection example, where FC, Forward Cohesion, is the series of averaged values plotted at the reference point of the first averaged score, and BC, Backward Cohesion, is the series of averaged values plotted at the reference point of the last averaged score.

[Figure 1: Example of TBCS Detection — cohesion scores plotted against location in text (words), showing minimal FC and minimal BC points, the moving-average range, the equilibrium point (EP), the detected TBCS, and section boundaries (4.4), (4.4.1)]

¹ All content words (i.e., verbs, nouns, and adjectives) extracted by a tokenizer for Japanese sentences.

Since the textual area just before the point at which FC is plotted is always in the left window when one of the averaged cohesion scores is calculated, FC indicates the strength of forward (left-to-right) cohesion at a point. Conversely, BC indicates the strength of backward cohesion at a point. In the figure, EP is the Equilibrium Point, at which FC and BC have an identical value. The algorithm checks FC and BC from the beginning to the end of the source text, and records a TBCS, as depicted by the rectangle, whenever an equilibrium point is detected (see (Nakao, 1999) for more information).

For a sample text, Figure 2 shows the resulting thematic hierarchy that was detected by the aforementioned procedure using varying window widths (the ordinates). Each horizontal sequence of rectangles depicts a list of TBCSs detected using a specific window width.

[Figure 2: Example of Thematic Hierarchy — TBCSs and section boundaries for window widths from 640 words (layer B(4)) through 1280 (B(3)), 2560 (B(2)), and 5120 (B(1)) up to the entire text (B(0)), plotted against location in text (words)]

To narrow the width of candidate sections, the algorithm then unifies a TBCS with another TBCS in the layer immediately below. It continues the process until the TBCSs in all layers, from the top to the bottom, are unified. After that, it outputs the thematic hierarchy as a set of lists of TBCS data:

  i: layer index of the thematic hierarchy
  B(i)[j]: TBCS data containing the following data members:
    ep: equilibrium point
    range: thematic boundary candidate section

In Figure 2, for example, B(1)[1] is unified with B(2)[1], B(3)[4], B(4)[6], ..., and the values of its data members (ep and range) are replaced by those of the unified TBCS in the bottom layer, which has been detected using the minimum window width (40 words).

Table 1: Accuracy of Thematic Hierarchy Detection

Window | Boundary #  | Original TBCS        | Unified TBCS
width  | cor.   res. | Recall    Precision  | Recall     Precision
5120   |   1      2  | 100 (22)   50 (11)   | 100 (0.3)   50 (0.1)
2560   |   2      4  | 100 (22)   50 (11)   |  50 (0.5)   25 (0.3)
1280   |   3     10  | 100 (27)   30 (8.1)  |  67 (1.4)   20 (0.4)
640    |  30     42  |  90 (23)   64 (16)   |  57 (2.3)   40 (1.7)
320    | 114    163  |  67 (22)   47 (16)   |  46 (4.5)   33 (3.2)
160    | 184    365  |  70 (22)   35 (11)   |  51 (9.1)   25 (4.6)
80     | 322    813  |  57 (25)   23 (10)   |  57 (21)    23 (8.2)
40     | 403   1681  |  52 (25)   13 (6.2)  |  71 (42)    17 (10)

The figures in parentheses are the baseline rates.

3.1.2 Results of Thematic Hierarchy Detection

Table 1 summarizes the accuracy of thematic hierarchy detection in an experiment using the following three kinds of Japanese text as test data: a technical survey report² that consists of three main sections and contains 17,816 content words; eight series of

² "Progress Report of Technical Committee on Network Access" in Survey on Natural Language Processing Systems, Japan Electronic Industry Development Association, chapter 4, pp. 117–197, Mar. 1997.
newspaper columns³, each of which consists of 4 to 24 articles containing about 400 words; and twelve economic research reports⁴, each of which consists of about ten articles containing 33 to 2,375 words.

³ Obtained from the Daily Yomiuri On-line (http://www.yomiuri.co.jp/).
⁴ Monthly reports written for a Japanese company by a Japanese professor living in the U.S.A.

In the table, cor. denotes the number of correct data values, composed of the starting points of sections that contain at least as many words as the window width listed in the same row⁵. In addition, res. denotes the number of detected TBCSs. The original TBCS columns list the recall and precision rates of detected TBCSs before TBCS unification, and the unified TBCS columns list those rates after TBCS unification. On each layer, the width of the candidate sections for the original TBCSs is about half of the window width, and that of the unified TBCSs is 25 words (about half of the minimum window width). The figures shown in parentheses are the baseline rates corresponding to random selection; that is, parts are randomly selected from the source text such that their total size equals the total area size of the TBCSs.

⁵ Only headings and intentional breaks, such as symbol lines inserted to separate a prologue or epilogue from a main body, are used as correct boundaries. As a result, the precision rates for smaller window widths tend to degrade because of insufficient amounts of correct data.

As the boundary figures indicate, the proposed algorithm decomposes a text into textual units of about the size of the corresponding window widths. In addition, the rates of detected TBCSs are clearly larger than their baselines. Furthermore, for the two relatively large series of newspaper columns, the major boundaries were detected properly.
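The bookkeeping behind the recall and precision figures can be sketched as follows. This is our reading of the evaluation described above, not the authors' code; in particular, counting a detected candidate section as correct when it contains a correct section start is an assumption.

```python
def tbcs_accuracy(tbcs_ranges, correct_boundaries):
    """Recall/precision of TBCS detection: a detected candidate section
    (start, end), in word offsets, is a hit if it contains a correct
    section boundary; a correct boundary is found if some candidate
    section contains it."""
    hits = sum(1 for lo, hi in tbcs_ranges
               if any(lo <= b <= hi for b in correct_boundaries))
    found = sum(1 for b in correct_boundaries
                if any(lo <= b <= hi for lo, hi in tbcs_ranges))
    precision = hits / len(tbcs_ranges) if tbcs_ranges else 0.0
    recall = found / len(correct_boundaries) if correct_boundaries else 0.0
    return recall, precision
```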
That is, using larger window widths, those boundaries were selectively detected that separate groups of columns by their subtopics. For example, the starting point of a set of three consecutive columns identically entitled "The Great Cultural Revolution" in the "Chinese Revolution" series was detected using the 1,280-word window width, as were those of the other three sets of identically entitled consecutive columns. Thus, the proposed algorithm is expected to be effective for arbitrarily selecting the size of textual units corresponding to topics of different grading.

However, there is a problem in how to determine a boundary point within the range defined by a TBCS. Although the previously published algorithm (Nakao, 1999) determines a boundary point using the minimal points of the cohesion scores for the smallest window width, the accuracy degrades substantially (see Table 3). The boundary sentence identification algorithm given below is a solution to this problem.

3.2 Boundary Sentence Identification

In the second stage, from the sentences in a TBCS, the algorithm identifies a boundary sentence, indicating where a topic corresponding to a textual unit probably starts, and selects a lead sentence that probably indicates the contents of subsequent parts in the same textual unit. Figure 3 shows the algorithm in detail.

3.2.1 Forward/Backward Relevance Calculation

In steps 2 and 3, boundaries are identified and lead sentences are selected based on two kinds of relevance scores for a sentence: forward relevance, indicating the sentence's relevance to the textual unit immediately after the sentence, and backward relevance, indicating the sentence's relevance to the textual unit immediately before the sentence. The difference between the forward and the backward relevance is referred to as relative forward relevance.

  1. Assign the target layer as the bottom layer of the thematic hierarchy: i ← i_max.
  2. For each TBCS in the target layer, B(i)[j], do the following:
     (a) If i = i_max, then select and identify all sentences in B(i)[j].range as Boundary Sentence Candidates (B.S.C.); otherwise, select and identify as B.S.C. the sentences in B(i)[j].range located before or identical to the boundary sentence of B(i+1).
     (b) From the B.S.C., identify as the Boundary Sentence (B.S.) a sentence whose relative forward relevance is greater than 0 and shows the largest increment over that of the previous sentence.
     (c) Among the sentences in the B.S.C. located after or identical to the B.S., select the sentence that has the greatest forward relevance as the Lead Sentence (L.S.).
  3. If i > 1, then i ← i − 1, and repeat from step 2.

Figure 3: Boundary Sentence Identification Algorithm

Forward or backward relevance is calculated using the formula below, where every textual unit is partitioned at the equilibrium points of two adjacent TBCSs in the target layer; the equilibrium point of each TBCS is initially set by the thematic hierarchy detection algorithm, and the point is replaced by the location of the boundary sentence once the boundary sentence has been identified (i.e., step 2b is completed):

  r_{S,u} = (1/|S|) Σ_{t∈S} (tf_{t,u} / |u|) · log(|D| / df_t)

  |S|       total number of terms in sentence S
  |u|       total number of terms in textual unit u
  tf_{t,u}  frequency of term t in textual unit u
  |D|       total number of fixed-width (80-word) blocks in the source text
  df_t      total number of fixed-width blocks in which term t appears

The use of this formula was proposed as an effective and simple measure for term importance estimation (Nakao, 1998)⁶. It is a modified version of entropy, where the information bit (the log part of the formula) is calculated by reducing the effect of term repetitions within a short period. The modification was made to raise the scores of important terms, based on the reported observation that content-bearing words tend to occur in clumps (Bookstein et al., 1998).

3.2.2 Example of Boundary Sentence Identification

Table 2 summarizes an example of boundary sentence identification for a TBCS located just before the 12,000th word in Figure 2. Every row in the table except the first, which is marked with O.R., shows a candidate sentence. The row marked B.S. shows the boundary sentence, which has positive relative forward relevance (0.016 in the fourth column of the row) and the greatest increment from the previous value (−0.017). The row marked L.S. shows the lead sentence, which has the greatest forward relevance (0.022 in the third column of the row) among all sentences after the boundary sentence.

Table 2: Example of Boundary Sentence Identification

                     Relevance
      Location  Backward  Forward  Relative  Sentence [partially presented] (translation)
O.R.  11122     0         0.017     0.017    [吉村他, 86] ([Yoshimura et al.])
      11124     0.021     0.004    −0.017    吉村賢治…: "…の自動抽出システム", …, pp.33–40, 1986
                                             (Yoshimura, Kenji ...: Automatic Extraction System of ...)
B.S.  11146     0         0.016     0.016    4.4. 検索エンジン (Search Engine)
L.S.  11148     0.005     0.022     0.017    ここでは…知的情報アクセスにおける…ついて報告する。
                                             (This section reports on ... of intelligent information access.)
      11170     0.010     0.016     0.006    以下の各節の報告に共通するテーマは、…である。
                                             (The key issue of the reports in the following clauses is ...)

3.2.3 Evaluation of Boundary Identification

Table 3 shows the recall and precision rates of the boundary identification algorithm in the same format as Table 1. Compared with the results obtained using the previous version of the algorithm (Nakao, 1999), shown in the minimal cohesion columns, the proposed algorithm identifies more accurate boundaries

⁶ An experiment reported in (Nakao, 1998) indicates that heading terms (i.e., terms appearing in headings) are effectively detected by scoring terms with the part of the formula inside the summation operator.
(the boundary sentence columns). In addition, boundary sentence identification was successful for 75% of the correct TBCSs, that is, the TBCSs including correct boundaries⁷ (see unified TBCS in Table 1). Thus, the proposed boundary sentence identification algorithm is judged to be effective.

Table 3 also summarizes a feature of the proposed algorithm: it tends to detect and identify headings as boundary sentences (the heading rate columns). For the part corresponding to larger textual units, which the proposed algorithm mainly uses, the figures in the overall columns indicate that half or more of the boundary sentences are identical to headings in the original text; and the figures in the identification columns indicate that the proposed algorithm identifies headings as boundary sentences in more than 80% of the cases where the TBCSs include headings.

3.3 Summary Construction

In the third and last stage, the algorithm outputs the boundary and lead sentences of the TBCSs on a layer that probably corresponds to topics of appropriate grading. Based on the ratio of the source text size to a given summary size, the algorithm chooses a layer that contains an appropriate number of TBCSs, and generates a summary with some breaks to indicate thematic changes. For example, to generate a 1,000-character summary consisting of several parts of approximately 200 characters for each topic, a text decomposition consisting of five textual units is appropriate for summarization. Since the sample text used here was decomposed into five textual units on the B(2) layer (see Figure 2), the algorithm outputs the boundary sentences and lead sentences of all TBCSs in B(2).

⁷ For the correct TBCSs, the average number of boundary sentence candidates is 4.4.

4 Discussion

Figure 5 shows a one-page summary of a technical survey report, where (a) is a part of the automatically generated summary and (b) is its translation.
It corresponds to the part of the source text between B(1)[1] and B(1)[2] (in Figure 2). It is composed of three parts corresponding to B(2)[1], B(2)[2], and B(3)[6]. Each part consists of a boundary sentence, presented as a heading, followed by a lead sentence.

In comparison with the keyword-based summary shown in Figure 4, generated by the process described in Section 2, the one-page summary gives a good impression of being easy to understand. In fact, when we informally asked more than five colleagues to state their impressions of these summaries, they agreed on this point. As described in Section 2, one of the reasons for the good impression should be the difference in coherence. The relationship among the sentences in the keyword-based summary is not clear; conversely, the second sentence of the one-page summary introduces the outline of the clause, and it is closely related to the sentences that follow it. The fact that the one-page summary provides at least two sentences, including a heading, for each topic is also considered to strengthen coherence.

As shown in Table 3, the proposed algorithm is expected to extract headings effectively. However, there is a problem in that detected headings do not always correspond to topics of appropriate grading. For example, the second boundary sentence in the example is not appropriate because it is the heading of a subclause much smaller than the window width corresponding to B(2)[2]; its previous sentence, "4.3.2 Technical Trend of IR Techniques", is a more appropriate one. This example is also related to another limitation of the proposed algorithm. Since there is no outline description in the part subsequent to the heading of clause 4.3.2, the proposed algorithm could not have generated a coherent extract even if it had identified that heading as a boundary sentence.
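For concreteness, the relevance scores of Section 3.2.1 and selection steps 2b and 2c of Figure 3 can be sketched in Python as follows. This is our own rendering, not the authors' code; representing sentences and units as plain token lists, falling back to the first candidate when no relative forward relevance is positive, and taking the first candidate's increment as its own score are assumptions.

```python
from math import log

def relevance(sentence_terms, unit_terms, df, n_blocks):
    """r_{S,u} = (1/|S|) * sum over t in S of (tf_{t,u}/|u|) * log(|D|/df_t).
    `df` maps a term to the number of fixed-width blocks of the source
    text it appears in; `n_blocks` is |D|, the total number of blocks."""
    if not sentence_terms or not unit_terms:
        return 0.0
    score = sum((unit_terms.count(t) / len(unit_terms)) *
                log(n_blocks / df.get(t, n_blocks))
                for t in sentence_terms)
    return score / len(sentence_terms)

def pick_boundary_and_lead(cands, prev_unit, next_unit, df, n_blocks):
    """Steps 2b/2c: the boundary sentence has positive relative forward
    relevance (forward minus backward) and the largest increment over the
    previous candidate; the lead sentence is the candidate at or after the
    boundary with the greatest forward relevance."""
    fwd = [relevance(s, next_unit, df, n_blocks) for s in cands]
    rel = [f - relevance(s, prev_unit, df, n_blocks) for f, s in zip(fwd, cands)]
    b_idx, best = 0, None            # fall back to first candidate (assumption)
    for i, r in enumerate(rel):
        inc = r - (rel[i - 1] if i > 0 else 0.0)
        if r > 0 and (best is None or inc > best):
            b_idx, best = i, inc
    l_idx = max(range(b_idx, len(cands)), key=lambda i: fwd[i])
    return b_idx, l_idx
```

A candidate whose terms occur only in the following unit thus gets a large positive relative forward relevance, which is what makes headings such as "4.4 Search Engine" in Table 2 surface as boundary sentences.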
It is a future issue to develop a more elaborate algorithm for summarizing the detected topics, especially for users who want richer information than can be provided in an extract consisting of two or three sentences.

5 Conclusion

This paper has proposed an algorithm for one-page summarization to help a user skim a long text. It has mainly described, and reported the effectiveness of, the boundary sentence identification part of the algorithm. It has also discussed the readability of one-page summaries. The effectiveness of structured summaries using the thematic hierarchy is an issue for future evaluation.

References

A. Bookstein, S. T. Klein, and T. Raita. 1998. Clumping properties of content-bearing words. Journal of the American Society for Information Science, 49(2):102–114.

Michael A. K. Halliday and Ruqaiya Hasan. 1976. Cohesion in English. Longman, London.

Marti A. Hearst. 1994. Multi-paragraph segmentation of expository text. In Proc. of the 32nd Annual Meeting of the Association for Computational Linguistics, pages 9–16.

Yoshio Nakao. 1998. Automatic keyword extraction based on the topic structure of a text. IPSJ SIG Notes FI-50-1. (in Japanese).

Yoshio Nakao. 1999. Thematic hierarchy detection of a text using lexical cohesion. Journal of the Association for Natural Language Processing, 6(6):83–112. (in Japanese).

Gerard Salton, Amit Singhal, Chris Buckley, and Mandar Mitra. 1996. Automatic text decomposition using text segments and text themes. In Proc. of Hypertext '96, pages 53–65. The Association for Computing Machinery.

Yaakov Yaari. 1998. Texplore — exploring expository texts via hierarchical representation. In Proc. of CVIF '98, pages 25–31. Association for Computational Linguistics.

Table 3: Evaluation of Boundary Sentence Identification

Window | Boundary #  | Minimal cohesion     | Boundary sentence    | Heading rate
width  | cor.   res. | Recall    Precision  | Recall    Precision  | Overall    Identification
5120   |   1      2  |   0 (0.1)   0 (0.05) | 100 (0.1)  50 (0.05) | 100 (6.6)  100 (29)
2560   |   2      4  |   0 (0.2)   0 (0.1)  | 100 (0.2)  50 (0.05) | 100 (6.6)  100 (29)
1280   |   3     10  |  33 (0.5)  10 (0.2)  |  67 (0.5)  20 (0.2)  |  80 (6.6)   80 (30)
640    |  30     42  |  27 (1.0)  19 (0.7)  |  47 (1.0)  33 (0.7)  |  67 (6.3)   88 (34)
320    | 114    163  |  26 (1.8)  18 (1.3)  |  40 (1.8)  28 (1.3)  |  54 (5.0)   82 (31)
160    | 184    365  |  28 (3.5)  14 (1.8)  |  43 (3.5)  22 (1.8)  |  37 (4.8)   77 (28)
80     | 322    813  |  29 (7.8)  12 (3.1)  |  45 (7.8)  18 (3.1)  |  23 (4.8)   70 (26)
40     | 403   1681  |  37 (17)    9 (3.9)  |  46 (16)   11 (3.9)  |  12 (4.8)   58 (26)

The figures in parentheses are the baseline rates.

4.3 ネットワーク上の検索サービス
…また検索精度を高めるために、高頻度語は検索の対象としない、タイトルや見出しに含まれる語に重みをつける、などの工夫がなされている。
…また、検索サービスが収集したページ数が膨大になるにつれて、ヒット数も膨大になってきたため、すばやく必要な情報を探すために、よりわかりやすい自動抄録作成技術が必要となる。…
… tf・idf 方式とは、単語に分割された文章の各単語の重要度を、その単語が文書中に出現する頻度 tf と、その単語を含む文書が文書集合中に出現する頻度の逆数 idf の積によってその単語の重要さを数値化する手法である。…
… [河合, 92] の研究キーワードのカイ二乗値から各キーワードの分類に対する得点を計算する場合に、シソーラス辞書から得られる抽象的な意味を得点に加える手法である。…

(a) Original (a part of a summary condensed to 1.3% of the source text)

4.3 Internet Services
... They are also enhanced with some techniques, such as eliminating high-frequency words, weighting terms in document titles and headings, etc., to achieve high precision. ...
... In addition, since the greatly increasing number of pages collected by an Internet search service causes a great increase in the average number of hits for a query, a more effective automatic text summarization technique is required to help a user find the necessary information quickly. ...
... The tf·idf method weights each word of a tokenized document with the product of its term frequency (tf) in the document and its inverse document frequency (idf), i.e., the inverse of the number of documents in which the term appears. ...
... [Kawai, 92] A document classification method that calculates a score based on χ² values of not only keyword frequencies but also semantic frequencies corresponding to occurrences of abstracted semantic categories in target divisions. ...

(b) Translation

Figure 4: Example of Keyword-based Summary (partially presented)

ネットワーク上の検索サービス [4.3 参照]
本節では、WWW 上の検索サービスと電子出版及び電子図書館について、現在行われている各サービスの特徴、技術的なポイント、問題点等を調査すると同時に、関連する研究分野も調査し、将来どのようなサービスが望まれるか、また、そこに必要となる技術は何であるか、についてまとめる。…

キーワード抽出 [(1) 参照]
ネットワーク上の文書をアクセスする方法の 1 つとしてキーワード検索がある。…

分散検索 [(4) 参照]
情報を一ヶ所に集中登録するタイプの検索サービスでは、今後ますます肥大化・多様化していく WWW には対応しきれなくなることが予想される。…

(a) Original (a part of a summary condensed to 1% of the source text)

Internet Services [see 4.3]
This clause surveys internet services, electronic publishing, and digital libraries; reports on their features, technical points, and the problems observed in their typical cases; and suggests the services desired in the future and the technology required for their realization, based on an investigation of related research areas. ...

Keyword Extraction [see (1)]
Keyword-based IR is a popular access method for retrieving documents on networks. ...

Distributed IR Systems [see (4)]
In the near future, it will be impossible for a single IR system storing all resources in a single database to handle the increasing number of large WWW text collections. ...

(b) Translation

Figure 5: Example of One-page Summary (partially presented)
2000
     "!#$ &%'  ()*,+  -/.103254 678:9<;=03>?6 75@BAC0EDGF67H7JIK0EL MNPO Q$R$SUTO VXWZY T Q'[]\W_^`Q$acbdSfehgji[lk<m^`an TSfNPQpo*qrehNr\ qPNtsNPnbdQ$SUaNP\uS vXwBxzy|{b}qlONP\#~€p\ e‚}NPQfR$e‚S]ƒ„^dW xHNPqlO \^`Vh^`…dƒ s‡†‰ˆ`Š}‹`ˆ}Œ{bdqlO NP\XkŽNPQ$acb}\uƒ =‘d’*“*”}•=–1—*˜ ‘d™`š›}œ<–ž Ÿ™u¡Eœu¢£E›E›¤ž¢“}— Ÿ¥ “ ¦€§ ¨©.žDª6«}. ¬­|® ¯3°²±´³3µ1³ ¶l·¹¸5¶ºE°²± »l¼E±‰±½® ¯u¶J¼E±]¶ ¾ž¿ »lµž± »lµžºu¶ZºÁÀ3­3°Â®]¶± ® µ1®]¶Ã®]·‰µž­E± ºE¼3»U¶l·‰± ¿h¾ ·ÁÄ,µž»‰¯E°Å­u¶¹®]·$µž­E± Ʋµ1® ° ¾ ­=Ç ÈÉ­}¼EÄÊ Ë ¶l· ¾ž¿ ±‰Ä,µžÆ²ÆÍÌ ºu¶Zº3°²»lµ1®]¶ZºÎ®]·‰µž­E± ºE¼3»U¶l·‰± °Å±pµ1³E³3ÆÅ°Â¶Zº€® ¾ » ¾ ­dϞ¶l· ®±]¶Z­d®]¶Z­E»U¶t³*µž°Â·‰± ¿ · ¾ Äе Ë °ÅƲ°²­uѪ¼EµžÆ » ¾ · ³3¼3±°²­G® ¾ ў¶Z­E¶l·]Ê µžÆÅ°ÂÒl¶ZºÓ®]·‰µž­3± Ʋµ1® ° ¾ ­Ô³3µ1®]®]¶l·‰­E±lÇÖÕ¯E¶Z±]¶ ³*µ1®]®]¶l·‰­E±lÌ,® ¾ ў¶l® ¯E¶l·C¸×°²® ¯Ø® ¯u¶´®]·‰µž­3±Ê º3¼E»U¶l·‰±µ1· ¶‡® ¯E¶Z­)¼E± ¶Zº)µž± µ¯E°Â¶l·‰µ1·‰»$¯E°ÙÊ »lµžÆ<®]·‰µž­E± ÆÅµ1® ° ¾ ­ÃĶZÄ ¾ · Ú ¿h¾ · ¿ ¼EƲÆÂÚtµž¼`Ê ® ¾ Ä:µ1® °²»®]·‰µž­E± ÆÅµ1® ° ¾ ­ÇÛ¶Z± ¼EƲ® ± ¾ ­€® ¯u¶ Ü ¶l·‰Ä,µž­}ÝdÞ5­uѪƲ°²± ¯ÐßàEáâã ä<â*å‚æç» ¾ ·]Ê ³*¼E±µ1· ¶zѪ°ÂϞ¶Z­=Ç è é 7.žDª9@HêH«}.ªëì9<7 í ¾ · ³*¼E± Ë µž±]¶Zºµ1³E³E· ¾ µž»‰¯u¶Z± ® ¾ µž¼E® ¾ Ä,µ1® °²»?®]·‰µž­E±Ê Ʋµ1® ° ¾ ­C» ¾ Ķ­#µ­}¼EÄ Ë ¶l· ¾ž¿ ºE°Ùï ¶l· ¶Z­d®zð3µ©Ï ¾ ·‰±lÇ ¬­„® ¯u¶:± °Åij3ÆÂ¶Z±]® ¿h¾ ·‰ÄÃ̏®]·‰µž­E± ÆÅµ1® ° ¾ ­E±zµ1· ¶t±]® ¾ · ¶Zº µž­EºK· ¶Z¼E±]¶Zº ¿‚¾ ·ñ® ¯u¶t®]·‰µž­E± Ʋµ1® ° ¾ ­ ¾ž¿ ­u¶l¸ò°²­u³3¼E®lÇ Õ¯E°Å±µ1³E³3· ¾ µž»‰¯ÌGó}­ ¾ ¸×­µž±X®]·‰µž­E± ÆÅµ1® ° ¾ ­Ä¶ZÄ ¾ · ÚžÌ ¶UôuµžÄ³3ÆÂ¶UÊ Ë µž± ¶Zº ¾ ·?»lµž± ¶UÊ Ë µž±]¶Zº)®]·‰µž­E± Ʋµ1® ° ¾ ­Ì3»lµž­ ¸ ¾ · ó ¾ ­p® ¯u¶¸ ¾ ·‰ºzÆÂ¶lϞ¶Zƪµž±<¸5¶ZÆÅƞµž± ¾ ­z±]®]·$¼E»U® ¼u· ¶Zº ¶UôuµžÄ³3ÆÂ¶Z±zµž±p® ¯u¶lڄµ1· ¶,ў¶Z­u¶l·$µ1®]¶ZºCºE¼E·‰°²­uÑ­EµžÆÙÊ Ú`± °²±µž­Eº½Ñž¶Z­E¶l·‰µ1® ° ¾ ­´°²­½Ä ¾ · ¶Ãў·‰µžÄ,Ä,µ1· Ê Ë µž±]¶Zº ®]·‰µž­E±‰Æ²µ1® ° ¾ ­Ã³3µ1·$µžºE°ÂѪÄ,±ñõ÷öz°Â® µž­ ¾ Ì/øZùžùžú}û ü · ¾ ¸×­Ì øZùžùžýªþ$Ç ÿX°²­E°²®]¶ ±]® µ1®]¶ ®]·‰µž­E± ºE¼3»U¶l·‰±l̸ׯE°²»$¯ »lµž­ Ë ¶ ÆÂ¶Zµ1·‰­E¶Zº ¿ · ¾ Ä Ë °²Æ²°Å­uѪ¼EµžÆ» ¾ · ³ ¾ ·‰µ`Ìc¯3µZϞ¶ Ë ¶l¶Z­ ³E· ¾ ³ ¾ ±]¶Zº ¿h¾ ·#µž¼u® ¾ Ä,µ1® °²»Á®]·$µž­E± Ʋµ1® ° ¾ ­ 
õ÷ÈĶZ­`Ê Ñª¼EµžÆÃ¶l®´µžÆ_ÇÂÌ  þ$̀µž±´¯EµZϞ¶ Ë ¶l¶Z­ Ë °²Æ²°²­EѪ¼EµžÆ ±]® ¾ »‰¯Eµž± ® °²»,ў·‰µžÄ:Ä,µ1·‰±tõ#¼=Ì øZùžùžýªþ$Ç`® µ1® °²±]® °²»lµžÆ µ1³E³E· ¾ µž»‰¯u¶Z±,õ#µž­uÑjµž­3º #µž° Ë ¶ZÆ_Ì/øZùžù dû  »‰¯c¶l® µžÆÍÇÂÌøZùžùžùªþ µžÆ²± ¾ñ¿ µžÆ²Æ°²­d® ¾ ® ¯u¶'»lµ1®]¶lÑ ¾ · Ú ¾ž¿ » ¾ · ³3¼E± Ë µž±]¶Zºµ1³E³E· ¾ µž»$¯u¶Z±lÇ ¬­j® ¯E°Å±³3µ1³ ¶l·ZÌ3µ:®]·‰µž­E± Ʋµ1® ° ¾ ­Ä¶l® ¯ ¾ º°Å±³E· ¾ Ê ³ ¾ ±]¶Zº¸×¯3°²»‰¯°²± Ë µž±]¶Zº ¾ ­,® ¯u¶Ϟ¶l· Ú± µžÄ¶×³E·‰°²­E»l°ÂÊ ³3Ʋ¶Z±zµž±z® ¯u¶tµ ¿‚¾ · ¶ZÄ,¶Z­G® ° ¾ ­E¶ZºCµ1³E³3· ¾ µž»‰¯u¶Z±ZÇ  ­E¶ ºE°Âï¶l· ¶Z­3»U¶×°²±lÌd® ¯Eµ1®5­ ¾ ® µ ¿ ¼EƲÆÂÚñµž¼u® ¾ Ä,µ1® °²»?®]·‰µž°²­uÊ °²­EÑ ¾ž¿ ® ¯u¶ ®]·$µž­E± Ʋµ1® ° ¾ ­ Ä ¾ ºE¶ZÆ:°²±„³ ¶l· ¿h¾ ·‰Ä¶Zº=Ç Û?µ1® ¯u¶l·ZÌ?µÁ­}¼EÄ Ë ¶l· ¾ž¿ ±]³ ¶Z»l°²µžÆ?³3¼u·‰³ ¾ ± ¶j®]·‰µž­E±Ê ºE¼3»U¶l·‰±5µ1·‰¶?¯Eµž­EºuÊì»U·‰µ ¿ ®]¶Zºµž­Eº¼E±]¶Zºt® ¯u¶Z­µ1®5® ¸ ¾ ³ ¾ °²­d® ±lÇîÿ/°²·‰±]®lÌ/® ¾ » ¾ ­dϞ¶l· ®‡® ¯E¶ Ë °²ÆÅ°²­uѪ¼EµžÆ/®]·‰µž°²­uÊ °²­EÑ» ¾ · ³*¼E±°Å­G® ¾ µ®]·‰µž­E± ÆÅµ1® ° ¾ ­‡Ä,¶ZÄ ¾ · ڇ» ¾ ­d® µž°²­`Ê °²­EÑC®]·‰µž­3± Ʋµ1® ° ¾ ­ ³3µ1®]®]¶l·$­E±î·‰µ1® ¯u¶l·Ã® ¯Eµž­ Ä,¶l· ¶ZÆÂÚ ±]¶Z­d®]¶Z­E»U¶Ã³3µž°²·‰±lÌ µž­EºÎ¸×¯E°Å»‰¯Î°²±°Â® ±]¶ZÆ ¿ ¼E± ¶ZºÎµž±µ ®]·‰µž­3± ºE¼E»U¶l·/°²­'® ¯u¶5®]·‰µž­E± Ʋµ1® ° ¾ ­z³E· ¾ »U¶Z± ±lÇ }¶Z» ¾ ­EºÌ ¸×¯E¶Z­j­u¶l¸Ô±]¶Z­d®]¶Z­E»U¶Z±?µ1· ¶z® ¾ Ë ¶'®]·‰µž­3± Ʋµ1®]¶ZºÌu® ¯u¶ ®]·‰µž­3± ºE¼E»U¶l·‰±ñµ1· ¶îµ1³E³3Ʋ°²¶Zº„® ¾ ®]·‰µž­E± ¿h¾ ·‰Ä&® ¯E¶°²­uÊ ³3¼E®H±]¶Z­d®]¶Z­E»U¶°Å­G® ¾z¾ ­E¶ ¾ · Ä,µž­GÚ³ ¾ ± ±‰° Ë ÆÂ¶ ® µ1· ў¶l® ±]¶Z­d®]¶Z­E»U¶Z±® ¯u¶ Ë ¶Z±]® ¾ž¿ ¸×¯3°²»‰¯Ìrµž»l» ¾ ·‰ºE°²­uÑ×® ¾ ± ¾ Ķ ± » ¾ ·$°²­uѱ »‰¯E¶ZĶžÌE°²± ±]¶ZÆÂ¶Z»U®]¶Zºjµž± ® ¯E¶®]·$µž­E± Ʋµ1® ° ¾ ­=Ç ¬­® ¯u¶'­u¶Uô`® ± ¶Z»U® ° ¾ ­Ìu® ¯u¶'» ¾ ­E±]®]·$¼E»U® ° ¾ ­ ¾ž¿ ® ¯u¶ ®]·‰µž­3± ºE¼E»U¶l·‰±„µž­3ºØ® ¯u¶½®]·‰µž­E±‰Æ²µ1® ° ¾ ­ØÄ¶ZÄ ¾ ·‰Ú°²± ¾ ¼u® ÆÅ°²­u¶ZºÇ×Õׯu¶Z­Ì® ¯u¶ñµ1³3³3Ʋ°²»lµ1® ° ¾ ­ ¾ž¿ ® ¯u¶ñ®]·‰µž­E±Ê ºE¼3»U¶l·‰± ¿h¾ ·×® ¯u¶z®]·$µž­E± Ʋµ1® ° ¾ ­ ¾ž¿ ­u¶l¸ ±]¶Z­G®]¶Z­3»U¶Z±?°²± ºu¶Z±‰»U·‰° Ë ¶ZºÇά ­½® ¯E¶jƲµž±]®,±]¶Z»U® ° ¾ ­ ® ¯u¶Ã· ¶Z± ¼3ÆÂ® ± ¾ž¿ ± ¾ Ä,¶z®]·‰µž­E± ÆÅµ1® ° ¾ ­î¶Uô}³ ¶l·‰°ÅĶZ­G® ±×µ1· ¶'Ѫ°ÂϞ¶Z­Ç   4H0  
2 Cascaded transducers

A finite state transducer (FST) is a finite state device which reads symbols from one channel and outputs a stream of symbols to a second channel. So, a FST can be depicted as a transition net with edges and nodes, where the nodes represent the states and the edges the possible state transitions. The edges are labelled with an input symbol and an output string, which may be the empty words of the two vocabularies. The final states can produce additional output.

We want to construct transducers for automatic machine translation from a given bilingual corpus. In fact, a collection of sentence pairs can be viewed as a trivial transducer, where each sentence pair is represented by a distinct line of nodes connected by edges labeled with the source sentence words and the target sentence emitted from the final state. This can easily be transformed into a tree transducer by building a prefix tree over the source sentences. In (Amengual et al., 2000) a method is given to propagate prefixes of the translations towards the root of such a tree transducer and to coalesce states to gain generalization power. We choose here a different route to generalization by using an approach similar to the one used for chunk parsing, where a cascade of FST is applied (Abney, 1997). Each transducer, defined by a set of regular expression patterns, reads part of the input sentence and writes a stream of category labels, which form, together with the unanalyzed parts of the sentence, the input to the next transducer in the cascade. Our approach differs from the aforementioned chunk parsing in that an analyzed sequence of words is not replaced by the category label but is kept as a parallel option for transducers applied at a later stage. How this leads to the construction of a translation graph will be explained in Section 4.

For translation, not only the analysis of the source sentence is required but also the generation of the target sentence. This can be achieved if the transducers write category labels as well as translations to the output channel. We allow for more than one translation for a given input sequence. This raises the question of how to select one translation over the others. Some kind of scoring is required, a point we will return to in section 3.1.

To summarize: each transducer is given as a set of quadruples of the form <label source-pattern target-pattern score>. At runtime these patterns are stored in a prefix tree with respect to the source patterns. We write the labels at first position as these translation patterns can be used in the reverse direction, i.e. from target language to source language. In section 3.2 this property is used to convert a bilingual corpus into a set of translation patterns which are formulated in terms of words and category labels. It also shows the structural identity to bilingual grammars as used in (Wu, 1996).

3 Construction of the transducers

Most of the transducers are customized towards the domain for which the translation system is developed. In the VERBMOBIL corpus, which is used for the experiments, time and date expressions are very prominent. To translate those expressions, a small grammar has been developed and coded as finite state transducer. Actually, two transducers are used. On the first level, words are replaced by labels, like DAYOFWEEK = {Montag#Monday, Dienstag#Tuesday, ...}. On the second level, these labels together with labeled numbers (ordinal, cardinal, fractions) from the number transducer are used to form complex time and date expressions. Some examples are given in Table 1.

Table 1: Compound date translation patterns.
  TIME    -> um NUMORD Uhr         at NUMORD o'clock       -0.5
  PERIOD  -> NUMCARD bis NUMCARD   NUMCARD till NUMCARD    -0.5
  PERIOD  -> NUMORD Monate lang    for NUMORD months       -3.0
  DATEDAY -> am DAYOFWEEK          on DAYOFWEEK            -0.5
  DATE    -> in der NUMCARD Woche  in the NUMCARD week     -0.5
  DATE    -> Anfang MONTH          beginning of MONTH      -0.5
  DATE    -> DATE bis zum DATE     from DATE till DATE     -0.5

All in all we use currently seven of those dedicated transducers: names (persons, towns, places, events, etc.), spelling sequences, numbers (ordinal, cardinal, fractions, etc.), simple time and date expressions, compound time and date expressions, part-of-speech tagging, grammar (noun phrases, verb phrases). The relationship between these different transducers is depicted in Figure 1. The arrows indicate that category labels introduced by one transducer are used by another transducer.

Figure 1: Hierarchy of transducers. [Nodes: Name, Spell, Number, Simple Date, POS-Tags, Compound Date, Grammar]

The division into these transducers is mainly a conceptual one. The five base level transducers could be coalesced into one transducer. Actually, this is done at runtime for efficiency. However, to keep them apart at construction time gives more flexibility. For example, while for a closed vocabulary in a speech translation task these transducers boil down to simple substitution lists, an open vocabulary task will require a more elaborate approach to proper name spotting or handling of numbers.

The part-of-speech transducer has been constructed semi-automatically. A tagger was used to get a word – POS tag list. This was combined with an automatically generated translation lexicon (Och et al., 1999) to produce a list of label – word – translation patterns. This was then manually corrected and augmented where necessary.

Ideally, one would like to have a common tagset for both source and target language. If this is not available an alternative is to use a tagset for one language and induce via the word to word correspondences a tagging for the second language. This is the approach taken in this study. As tagset we use the Stuttgart-Tübinger tagset for German (Schiller et al., 1995).

Finally, a small bilingual grammar based on POS tags has been crafted manually. The purpose of the grammar is twofold: First, improving generalization by recognizing simple noun and prepositional phrases. Second, to handle the different word orderings in source and target language, especially in the verb phrases.

3.1 Scoring

The scores attached to the translation patterns can be viewed as a kind of translation scores. In the current implementation a rather crude heuristics together with some manual tuning in the grammar transducer is applied. The idea is to give preference to longer translation patterns as they take more context into account and encode word reordering in an explicit manner. So, for simple and compound translation patterns the score is exponential to the length of the source pattern. The scores are negative by convention: not translating a word gives zero cost, translating it gives a benefit, i.e. negative costs.

3.2 Bilingual labeling

The sentence pairs in the bilingual training corpus could be used directly as a simple translation memory. However, to improve the coverage on unseen data, these segments are transformed into translation patterns containing category labels. For each transducer taken from the complete cascade – as given in Figure 1 – the transducers are applied to both, the source and the target sentences of the bilingual training corpus (Vogel and Ney, 2000). Those sentence pairs where number and types of category labels in source and target sentence match each other are selected into the database of compound translation patterns. Table 2 shows examples of some translation patterns which resulted from bilingual labeling.
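As a rough illustration of the pattern storage described above — a sketch only, with invented patterns and scores, not the system's actual implementation — the scored quadruples can be kept in a prefix tree over the source-pattern tokens:

```python
# Sketch only: translation patterns <label, source, target, score> kept in
# a prefix tree over the source-pattern tokens (patterns and scores invented).
class PatternTrie:
    def __init__(self):
        self.children = {}  # next source token -> subtree
        self.entries = []   # (label, target_pattern, score) ending here

    def insert(self, label, source, target, score):
        node = self
        for token in source.split():
            node = node.children.setdefault(token, PatternTrie())
        node.entries.append((label, target, score))

    def prefix_matches(self, tokens):
        """All stored patterns matching a prefix of the token sequence."""
        node, matches = self, []
        for depth, token in enumerate(tokens, 1):
            node = node.children.get(token)
            if node is None:
                break
            for label, target, score in node.entries:
                matches.append((depth, label, target, score))
        return matches

trie = PatternTrie()
trie.insert("TIME", "um NUMORD Uhr", "at NUMORD o'clock", -0.5)
trie.insert("DATE", "Anfang MONTH", "beginning of MONTH", -0.5)
print(trie.prefix_matches("um NUMORD Uhr bitte".split()))
# -> [(3, 'TIME', "at NUMORD o'clock", -0.5)]
```

Storing patterns this way lets all partially matching hypotheses for a position be advanced token by token, which is what a synchronous left-to-right traversal requires.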
4 The translation process

The working of the transducers is best described as the construction of a translation graph. That is to say, the sentence to be translated is viewed as a graph which is traversed from left to right. For each matching source pattern, as stored in the transducers, a new edge is added to the graph. The edge is labeled with the category label of the translation pattern. The translation and the translation score are attached to the edge. In this way a translation graph is constructed. In those cases, where a source pattern has several translations, one edge for each translation is added to the graph. One advantage of this approach is that it can be applied to perform translation on word lattices as generated by speech recognition systems without any modifications.

Table 2: Compound translation patterns.
  CTP -> DATE ginge es wieder      DATE it is possible again      -4.5
  CTP -> NAME SURNAME am Apparat   this is NAME SURNAME speaking  -4.5
  CTP -> NP dauert DATE            NP takes DATE                  -3.3
  CTP -> nehmen PPER NP DATE       let PPER take NP DATE          -4.5

The left–right traversal of the graph is organized in such a way that all paths are traversed in parallel and the patterns stored in the transducer are matched synchronously. For each node and each edge leading to that node all patterns in the transducer starting with the word or category label of that edge are attached to the node. This gives a number of hypotheses describing partially matching patterns. Already started hypotheses are expanded with the label of the edge running from the previous node to the current node.

As an example, the translation graph for the sentence "Samstag und Februar sind gut, aber der vierte wäre besser" is shown in Figure 2. Actually, the graph is much bigger. In the figure, only those edges are shown which contributed to the construction of the best path.

Figure 2: Translation example. [Translation graph over "Samstag und Februar sind gut aber der vierte wäre besser", with edges such as DAYWEEK: Saturday (-0.5), MONTH: February (-0.5), ADV: but (-0.1), NUM_ORD: fourth (-2.0), DATE: Saturday (-0.6), DATE: February (-0.6), DATEDAY: the fourth (-4.0), DATE: the fourth (-4.1), DATE: Saturday and February (-4.2), S_PHRASE: are good (-2.1), C_PHRASE: the fourth would be better (-7.4)]

4.1 Approximative matching

To improve the coverage on unseen test data, it may be advantageous to allow for only approximative matching with the segments in the translation memory. The idea is to apply longer segments for syntactically better translations without losing too much as far as the content of the sentences is concerned. We use a weighted edit distance, i.e. each error (insertion, deletion, substitution) is associated with a score. Thereby, the deletion or insertion of typical filler words can be allowed, whereas the deletion or insertion of content words is avoided.

Hypotheses with too high a matching error score are discarded. A threshold proportional to the number of covered positions is used. Thus, longer translation patterns can be matched with more insertions, deletions and substitutions. A drawback of this is, however, that for long patterns mismatches on content words may occur.

Each transducer has its own list of insertion, deletion and substitution scores. Actually, only for those transducers where the translation patterns cover longer sequences of words and labels do we use error tolerant matching.

Error-tolerant matching may also help to compensate for speech recognition errors in the case of speech translations. In that case the confusion matrix obtained by comparing the recognizer output for the training speech data with the transliteration can be used.

4.2 Finding the best path

The application of the transducers to a given source sentence yields a large number of target sentences. These are scored according to the cumulative scores of the applied translation patterns. As an independent and direct model of the likelihood of the target sentences a language model is applied. We use a word-based trigram language model (Sawaf et al., 2000). The logarithm of the language model probabilities is added to the transducer scores when the best path through the translation graph is extracted. A scaling factor allows for a bias on the effect of the language model.

5 Experiments and results

In this section, we will give some results obtained with the cascaded transducer approach. Experiments were performed on the VERBMOBIL corpus. This corpus consists of spontaneously spoken dialogs in the appointment scheduling domain (Wahlster, 2000). A summary of the corpus used in the experiments is given in Table 3. In Table 4 the sizes of the special purpose transducers are given.

Table 3: Training and test conditions for the VERBMOBIL task. The trigram perplexity (PP) is given.
                       German    English
  Train  Sentences         34446
         Words       353614     383609
         Voc.          5381       3455
  Test   Sentences           147
         Words         1968       1943
         PP               –       19.5

Table 4: Size of the transducers.
  Transducer      Patterns
  Names                440
  Numbers              340
  Spell                 50
  Simple Date          151
  Compound Date        143
  Word Tags           5014
  Grammar              104

The sentences from the training corpus were segmented into shorter segments using sentence marks as breakpoints. This resulted in 43509 bilingual phrases running from 1 word up to 80 words in length. The longest phrases were discarded as it is very unlikely that they will match other sentences. So, for the construction of the translation patterns only 42000 sentence pairs were used, the longest sentences containing sixteen source words. Starting from those simple phrases, successively more transducers were applied up to the full cascade. A total of 16580 translation patterns containing one or more labels resulted and nearly 4600 sentence pairs became identical when words or word sequences were replaced by labels.

For a test corpus consisting of 147 sentences, the translations have been evaluated according to two measures (Nießen et al., 2000):

Multi-reference word error rate (mWER): for each source sentence several good translations are given. The word error rate between the generated translation and the closest reference is calculated.

Subjective sentence error rate (SSER): the translations are evaluated by a human examiner using a scale ranging from 0 to 10. The average of these values is linearly transformed to give the sentence error rate in percent.

5.1 Effect of grammar

A simple translation memory without any categorization gives insufficient coverage on unseen test data. With the part-of-speech transducer we get one or more translations for each word in the vocabulary. But only by applying transducers which handle longer translation patterns is word reordering possible.

In Table 5 the results are given for different combinations of transducers. The baseline (T) is the combination of all special purpose transducers (name, spell, number, date, word tags) plus the simple translation patterns. Then the grammar was added and finally the compound translation patterns. The trigram language model for the target language was applied in selecting the best translation, but no error-tolerant matching was allowed.

Table 5: Effect of bilingual grammar on translation quality: T = POS-tagging, G = grammar, C = compound translation patterns.
  Transducer   mWER[%]   SSER[%]
  T               41.0      46.8
  T G             39.5      40.6
  T G C           38.8      40.1

We observe a clear effect in word error rate and subjective sentence error rate. The use of the bilingual grammar, also very restricted, improves translation quality. Applying the compound translation patterns gives an additional small improvement.

In Table 6 a simple and a more involved example for the reordering effect of the bilingual grammar are given. The first translation pattern operates solely on the level of POS tags whereas the second example generates a hierarchical structure. We are not concerned whether the source sentence parses are correct, good translations is what we are looking for.

Table 6: Example for the application of the bilingual grammar.
  VP -> PPER VMFIN PP VVINF    PPER VMFIN VVINF PP
  [Hierarchical derivation translating "ich möchte mit Ihnen einen Termin vereinbaren" as "I want to arrange a date with you" (score -10.69), built from PPER: ich # I (-0.1), VMFIN: möchte # want (-0.1), APPR: mit # with (-0.1), PPER: Ihnen # you (-0.1), ART: einen # a (-0.01), NN: Termin # date (-0.1), NP: a date, PP: a date with you, VVINF: vereinbaren # to arrange (-0.1)]

5.2 Effect of language model

The next experiment shows the effect of applying a language model for the target language. A word-based trigram language model was interpolated with the scores from the transducers. In Table 7 the effect of the scaling between the two models is shown.

There is a clear drop in the word error rate when switching on the language model. This is due to the fact, that several translation hypotheses have the same score from the transducers. So, it is rather by chance if the best translation for a given word is chosen. The language model for the target language helps in doing this.

Table 7: Effect of language model on word error rate and subjective sentence error rate.
  LM scale   mWER[%]   SSER[%]
  0.0           49.3      51.8
  0.2           38.8      43.6
  0.6           38.8      40.1
  1.0           39.4      43.8
  6.0           40.5      48.4

There is a second benefit gained from the language model: sometimes the source sentence can be covered with only very short source patterns. That is to say, word context is hardly taken into account. With a language model context is brought into play again. If the language model scaling factor is increased too much translation quality deteriorates again. So, a good balance between both knowledge sources is necessary.

In Table 8 some examples which show the effect of the language model are given. The first translation is without language model, the second is the translation obtained when the language model score is added using a scaling factor of 0.6.

Table 8: Examples for the effect of the language model.
  erst wieder ab dem sechzehnten.
  no LM:    starting from the sixteenth only again.
  with LM:  only starting from the sixteenth.
  ja, wunderbar. machen wir das so, und dann treffen wir uns dann in Hamburg.
  no LM:    yes, nice. will we do which right, after all we meet us after all in Hamburg.
  with LM:  fine. let us do it like that, and then we will meet then in Hamburg.

5.3 Effect of error-tolerant matching

Finally, the effect of error tolerant matching has been investigated. Only for the simple and compound translation patterns errors have been allowed in matching parts of the input sentences to stored translation patterns. The effect of increasing the error threshold is given in Table 9.

Table 9: Effect of error tolerant matching.
  Errors per word   mWER[%]   SSER[%]
  0.0                  38.8      40.1
  0.2                  38.3      40.3
  0.4                  37.0      41.0
  0.5                  39.5      44.0

We see a considerable improvement when allowing for a small number of errors in matching the translation patterns to the input sentence. However, if the match gets too sloppy serious errors occur which alter the meaning of the sentence. For longer sequences of words the number of errors allowed becomes higher than the default score for substitutions. In such a case content words can be substituted. An example of how the same source sentence gets different translations when more matching errors are allowed is given in Table 10.

Table 10: Examples for the effect of error tolerant matching.
  ja, wunderbar. machen wir das so, und dann treffen wir uns dann in Hamburg.
  0.0   fine. let us do it like that, and then we will meet then in Hamburg.
  0.2   fine. let us do that, and then we will meet in Hamburg.
  0.4   fine. let us do it like that, and then we will meet in Hamburg.
  0.5   fine. let us do it like that, and then we will meet in your office.

6 Conclusion and future work

In this paper a translation approach based on cascaded finite state transducers has been presented. A small number of simple transducers is hand-crafted and then used to convert a bilingual corpus into a translation memory consisting of source pattern – target pattern pairs, which include category labels. Translation is then performed by applying the complete cascade of transducers.

With the simple heuristic for the translation scores a language model for the target language is paramount to select good translations. Error-tolerant matching improves translation quality.

Experiments have shown the potential of this approach for machine translation. Good coverage on unseen test data could be obtained. A major advantage of this translation method is that it breaks the middle ground between direct translation methods like simple translation memory or word-based statistical translation and transfer based methods involving deep linguistic analysis of the input. In fact, the cascaded transducer approach allows for building quickly a first version and improving translation quality by gradually adding more linguistic and domain specific knowledge.

We expect further improvement by assigning translation scores according to corpus statistics. This will be the main focus for future work.

Acknowledgments

This work was partly supported by the German Federal Ministry of Education, Science, Research and Technology under the Contract Number 01 IV 701 T4 (VERBMOBIL).

References

S. Abney. 1997. Part-of-speech tagging and partial parsing. In S. Young and G. Bloothooft, editors, Corpus-Based Methods in Language and Speech Processing, pages 118–136. Kluwer Academic Publishers, Dordrecht, Boston, London.

J.C. Amengual, J.M. Benedí, F. Casacuberta, A. Castaño, A. Castellanos, V.M. Jiménez, D. Llorens, A. Marzal, M. Pastor, F. Prat, E. Vidal, and J.M. Vilar. 2000. The EuTrans-I speech translation system. Machine Translation, Special Issue, forthcoming.

R.D. Brown. 1996. Example-based machine translation in the Pangloss system. In Proc. of COLING '96: The 16th Int. Conf. on Computational Linguistics, pages 169–174, Copenhagen, Denmark, August.

H. Kitano. 1993. A comprehensive and practical model of memory-based machine translation. In R. Bajcsy, editor, Proc. of the 13th Int. Joint Conf. on Artificial Intelligence. Morgan Kaufmann.

S. Nießen, F.J. Och, G. Leusch, and H. Ney. 2000. An evaluation tool for machine translation: Fast evaluation for MT research. In 2nd Int. Conf. on Language Resources and Evaluation, pages 39–45, Athens, Greece, May.

F.J. Och, C. Tillmann, and H. Ney. 1999. Improved alignment models for statistical machine translation. In Proc. of the Joint SIGDAT Conf. on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 20–28, University of Maryland, College Park, MD, USA, June.

H. Sawaf, K. Schütz, and H. Ney. 2000. On the use of grammar based language models for statistical machine translation. In 6th Int. Workshop on Parsing Technologies, pages 231–241, Trento, Italy, February.

A. Schiller, S. Teufel, and C. Thielen. 1995. Guidelines für das Tagging deutscher Textkorpora mit STTS. Technical report, Universität Stuttgart and Universität Tübingen. http://www.sfs.nphil.uni-tuebingen.de/Elwis/stts/stts.html.

S. Vogel and H. Ney. 2000. Construction of a hierarchical translation memory. In Proc. of COLING 2000: The 18th Int. Conf. on Computational Linguistics, pages 1131–1135, Saarbrücken, Germany, July.

Wahlster, W. (Ed.). 2000. Verbmobil: Foundations of Speech-to-Speech Translation. Springer-Verlag, Heidelberg.

Y.-Y. Wang and A. Waibel. 1997. Decoding algorithm in statistical translation. In Proc. of the 35th Annual Conf. of the Association for Computational Linguistics, pages 366–372, Madrid, Spain, July.

D. Wu. 1996. A polynomial-time algorithm for statistical machine translation. In Proc. of the 34th Annual Conf. of the Association for Computational Linguistics, pages 152–158, Santa Cruz, CA, June.
2000
      !"$#%!  & ' (*)+  ! , -/. 021340257683:9;52<=9?> @ 340BADC&EGFH9?IJ<=KL3:MNO9QP"PRASC&ET6UK&MWVYX> @ 3:MZASC&ET[\K>^]_MQ34`aK>"b 340Jcdce.f>"g hikjl(monpq hrtsusLv2wDxzyLAuAS{p&{D| }&~€D Ca‚\m  {n8ASC„ƒ …d†‡uˆ„‰ Št‹\Œd4Š ˆL‡DŽ::‡:'Š‘4’DŽ&’L”“J•„–4—”‹fŒ"˜–&‡ ]U1'c™5š02KIS5 ›_œž4Ÿ œ¡ tœ¡¢D£¤£ ¥„œ§¦©¨4ª«£ ¬­ª­¬®¢„¯2¨4°šª± ¨4¦² ¦±°dŸ¬«³¡°d£ ¬«´2¢¶µ·¨4¢”¸¹£ ¬«´2¢4°šª­¬«£?º/µ·´šŸ¼»½”¾¿4À Á࿔ķńÆ4° t&œ€œ¡¸¥7£tŸ°š¢4  ª­°d£ ¬­´2¢z tºS  £tœ¡¦zÇ ›_œŸ œ¡¨4 tœÈŸ œ¡ t´2¨„Ÿ¸¹œ¡  ´šµÉ£ ¥„œÈ tºu t£tœ¡¦Ê£t´ ¸¹Ÿœ¡°d£tœ°Y  ¨”¦o¦o°dŸ ºšÇaË µ·£tœ€Ÿ"¸¹´2¢D£tœ¡¢D£"œ¹ÌS² £tŸ°š¸¹£ ¬«´2¢ÆfÍ'œ±¬­¢D£tœ€Ÿ 4Ÿœ€£Î£ ¥„œŸ œ¡  ¨4ª­£  Y¬­¢ £ ¥4œ Ï4¬­°šª«´š¯¸¹´2¢J£tœ¹Ìu£€Ç=ËР ¨4¦±¦o°dŸ º©¯šœ¡¢u² œ€Ÿ°d£t´šŸ 4Ÿ ´ÒÑS¬®Ï„œ¡ £ ¥„œÈ¬®¢„”¨„£k£t´o¯šœ¡¢„œ€Ÿ°e² £ ¬­´2¢ÇÓËÕÔ4Ÿ  £Éœ€Ñe°šª­¨”°d£ ¬«´2¢¬­¢4Ï4¬®¸€°d£tœ¡ a£ ¥„œ µ·œ¡°š  ¬«Ö:¬­ª­¬«£?ºo´šµ\£ ¥4œÈ°d44Ÿ ´2°š¸¥Ç × Ø >\5š02.Éb=Ù=IS529W.f> Ú¢7£ ¥„œª­°š t£(τœ¡¸€°šÏ„œšÆ&°š¨„£t´2¦o°d£ ¬­¸  ¨4¦±¦o°dŸ¬«³¡°d£ ¬«´2¢ ´šµ¼£tœ¹ÌS£ ¨”°šªÜÛ·´2¢u²Wª­¬®¢„œÒÝܦo°d£tœ€Ÿ¬­°šªÍk°š £ ¥4œ¶¦o°š¬­¢ ¯š´2°šª´šµ(4Ÿ ´š¯šŸ°š¦± oª­¬«Þšœzßàá'â㰚¢4ϤßÚä'å„ßá=à Ûæ tœ€œoœšÇ範Ç'Û;è鰚¢4¬=°š¢4Ïè鰙ºDÖ:¨„Ÿ ºšÆ"ê¡ëšëšë2ÝtÝÇìߥ„œ¡ tœ 4Ÿ ´eítœ¡¸¹£  fÏ4œ¡°šª«£ Í ¬«£ ¥Y£ ¥„œ=  ¨4¦±¦o°dŸ¬«³¡°d£ ¬«´2¢Y´šµLîÉïñðæòæó òWô¹õö£tœ¹Ìu£  €Ç÷›ø¬«£ ¥Ð£ ¥4œZ°¡Ñd°š¬­ª­°d֔¬®ª­¬«£WºÕ´šµ t&œ€œ¡¸¥u² ֔°š tœ¡Ï_ϔ¬­°šª«´š¯2¨„œ tºu t£tœ¡¦o €Æ ¬­£Ã¬® Î°šª­ t´¼&´2    ¬«Ö”ª­œÈ£t´ 4Ÿ ´uÏ4¨4¸¹œÎ  ¨”¦o¦o°dŸ¬«œ¡ kµ·´šŸ  t&´šÞšœ¡¢7ϔ¬­°šª«´š¯2¨„œšÇ ›ø¬«£ ¥4¬­¢ù£ ¥„œT t&œ€œ¡¸¥„²Q£t´d²W t&œ€œ¡¸¥ú£tŸ°š¢”  ª­°d£ ¬«´2¢  tºu t£tœ¡¦»½4¾¿ Áà¿:ÄûÅ/Û·›°š¥4ª­ t£tœ€Ÿ¡ÆÉüdýšýšýJÝÆ °7 tºu ² £tœ¡¦þ£ ¥4°d£é£tŸ°š¢”  ª­°d£tœ¡ 8¢„œ€¯š´š£ ¬­°d£ ¬«´2¢4 8¬­¢Ð£ ¥4œ/τ´d² ¦o°š¬­¢” k´šµÓ  ¸¥„œ¡Ï4¨4ª®¬­¢„¯„Æu£tŸ°¡Ñšœ¡ªf”ª­°š¢4¢”¬­¢„¯„Æu°š¢4Ï7¥„´d² £tœ¡ªSŸ œ¡ tœ€Ÿ Ñd°d£ ¬«´2¢©Ö&œ€£WÍ'œ€œ¡¢±ÿ(œ€Ÿ¦o°š¢°š¢4ÏJ°d”°š¢„œ¡ tœ ´šŸìá"¢„¯2ª­¬­ ¥Æ Í'œéτœ€Ñšœ¡ª«´š&œ¡Ï¤  ¨4¦±¦o°dŸ¬«³¡°d£ ¬«´2¢ µ·°e² ¸€¬­ª­¬­£ ¬«œ¡ ±£ ¥”°d£z£ °dޚœÞS¢4´™Í ª«œ¡Ï4¯šœU  ´2¨„Ÿ¸¹œ¡ z°šª«Ÿ œ¡°šÏ4º 4Ÿ œ¡  œ¡¢J£aµ·´šŸÉ£tŸ°š¢4  ª­°d£ ¬­´2¢:¨„Ÿ &´2 tœ¡ Ó°š¢4Ϩ4 tœk£ ¥„œ¡¦ 
The rationale behind the summarization in a translation system is to provide the user with notes about the dialogue in their native language. They can be used, e.g., for insertion in schedules, or to check whether the main points of the conversation were correctly recognized and translated by the system.

Our view on summarization is tightly linked to the underlying task of negotiation, where you are interested in those objects that all speakers agreed on. In the course of a dialogue many suggestions are brought forward: some are accepted, others rejected, some just forgotten and never mentioned again. In a word, the information is scattered across the dialogue. For summarization, we try to bundle singular data together to form suggestions while keeping track of explicit and implicit statements of acceptance and rejection. The resulting items are presented in a fixed thematic order.

We start by first giving a rough sketch of all modules involved. Then, we show how we robustly extract a content description of utterances from the speech recognizer's output and build a core representation within the dialogue memory. The dialogue processor extends this data and corrects implausible input. We also show how we can use these representations to produce an abstract summary description that is converted by the German language generation module into a natural language summary. By utilizing the transfer component we are able to produce the summary in any language of the VERBMOBIL system. Finally, a first evaluation is presented.

Figure 1: Architecture of the summary generation in VERBMOBIL (extraction, interpretation with dialogue processor and dialogue memory, and summary generation).
2 Architecture

Three VERBMOBIL modules work together to produce a dialogue summary, as shown in Figure 1. The input to the processing chain is the best hypothesis from the speech recognizers. The word error rate of the speaker-independent speech recognizers is currently about 20-30%. The best hypothesis is annotated with prosodic information needed for the segmentation of turns. A turn is one dialogue contribution and can consist of one or more sentence-like units, henceforth called utterances. Since the prosodic information originates from a probabilistic process, the decisions where to insert utterance boundaries are sometimes wrong.

In the extraction module we compute the core intention using a statistical classifier which selects one out of 19 basic dialogue act classes (Reithinger and Klesen, 1997). In parallel, we robustly extract task-relevant content using finite state transducers (Reithinger, 1999). Both dialogue act and content expression represent the main content relevant for the domains VERBMOBIL is operating in.

These structures are sent to the interpretation module, where they are stored in chronological order as extracted objects (ExOs). The module's dialogue processor interprets this data in terms of suggestions and attitudes (acceptance, rejection), since our summarization goal is to collect all task-related agreements. To this end, we use discourse and world knowledge to complete the current suggestion with all past data referring to this proposal. The resulting structures are called negotiation objects (NeOs), a subset of which (the accepted ones) are then selected as content for the summary.

The summary generation module is an interface to VERBMOBIL's German natural language generation module. It assembles the thematically structured summary document using interface terms that describe verb, sentence mood, semantic descriptions for events and locations, etc. The German syntactic generator of VERBMOBIL produces semantic-syntactic structures for the summary and, in case of an English summary, feeds these structures to the transfer module of VERBMOBIL to obtain the corresponding English structures. After the generation in the target language, the result is marked up with HTML tags for adequate visualization.
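The three-stage chain (extraction, interpretation, generation) can be sketched as a simple pipeline. This is a minimal illustration only; the `ExtractedObject`/`NegotiationObject` types and the three callback functions are assumptions, not the actual VERBMOBIL interfaces:

```python
from dataclasses import dataclass, field

@dataclass
class ExtractedObject:
    """ExO: dialogue act plus content description of one utterance."""
    dialogue_act: str
    content: dict

@dataclass
class NegotiationObject:
    """NeO: a suggestion built from one or more ExOs."""
    topic: str
    content: dict
    attitudes: list = field(default_factory=list)  # "accept" / "reject" marks

def summarize(utterances, extract, interpret, generate):
    """Extraction -> interpretation -> summary generation."""
    exos = [extract(u) for u in utterances]        # extraction module
    neos = interpret(exos)                         # dialogue processor
    accepted = [n for n in neos
                if "accept" in n.attitudes and "reject" not in n.attitudes]
    return generate(accepted)                      # summary generator
```

Only NeOs that carry an accept mark and no reject mark reach the generator, mirroring the selection of task-related agreements described above.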
3 Extraction

The first step in the processing chain is the extraction of an abstract content representation for each utterance. This functionality was originally developed as a sub-module of a dialogue act based translation module within VERBMOBIL (Reithinger, 1999) and later on emerged as an important part of the dialogue processing chain.

Consider you have to process input where the recognizer replaced "good so we will" with "I would so we were". The aim is to get an abstract representation of the content and the intention, irrespective of recognition errors like these. For such an utterance the extraction yields, e.g.:

  (has-move MOVE
    (has-source-location (has-city (has-name 'hamburg')))
    (has-departure-time (has-date (has-time 'day 14'))))

As target representation of the content we use a formalism that comprises the dialogue act, which describes the speaker's intention (in the example INFORM), and attribute-value pairs for the content objects (see (Levin et al., 1998) for a comparable approach in speech-to-speech translation systems).

The dialogue act is computed statistically, using language models (Reithinger and Klesen, 1997; Tanaka and Yokoo, 1999). The dialogue act recognizer currently discriminates 19 different types of acts that cover, e.g., suggestions, requests, accepts and rejects, dialogue opening and closing acts, and others. The classifier is trained on a total of about 1,000 dialogues (consisting of German, English, and Japanese dialogues), which amount to 37,505 utterances. An evaluation where each single dialogue was tested using all the other dialogues as training set resulted in an overall recall value of 72.48% and a precision of 69.90%. The dialogue act is used later in the dialogue processor to trigger internal dialogue actions for the summarization process: e.g., SUGGEST adds information, REJECT discards information, and utterances marked with GIVE_REASON are ignored.

The second part of the expression describes the extracted content. We have chosen nested attribute-value descriptions for this task; 49 different classes of attribute-value descriptions exist. The extracted information doesn't describe the utterance exactly but is restricted to the propositional content relevant for the summarization process, like locations, dates, hotels, train information, or moving direction (e.g., leaving vs. arriving). The attribute-value descriptions are also specially designed to facilitate the task of combining them in the dialogue processor.

To extract the information we use finite state transducers (FSTs) (Appelt et al., 1993) augmented with functions used, e.g., for scanning input in advance or handling nested objects. The FSTs are hierarchically ordered and grouped in three sequentially processed layers (extracting temporal expressions, creating simple objects using keyword spotting, and combining these simple objects into complex ones). The construction of the FSTs is facilitated by various tools, e.g., a graphical drawing tool for FST development, a syntax checker, and several debugging tools. Currently, we have defined 334 multi-language FSTs for the analysis of German, English and Japanese. The FSTs were empirically derived from our sample corpus of about 30,000 utterances.
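The FSTs themselves are not reproduced here, but the "simple objects via keyword spotting" layer can be imitated with a toy pattern matcher that builds nested attribute-value descriptions. The lexicon and attribute names below are invented for illustration:

```python
import re

# Toy keyword lexicon for the second extraction layer (illustrative only).
CITY = re.compile(r"\b(hamburg|hanover|frankfurt)\b", re.IGNORECASE)
TIME = re.compile(r"\b(\d{1,2})\s+o'?clock\b", re.IGNORECASE)

def extract_content(utterance):
    """Build a nested attribute-value description from one utterance."""
    content = {}
    city = CITY.search(utterance)
    if city:
        content["has_location"] = {"has_city": {"has_name": city.group(1).lower()}}
    time = TIME.search(utterance)
    if time:
        content["has_departure_time"] = {"has_time": int(time.group(1))}
    return content
```

As in the paper's design, only propositional content relevant to the task is captured; everything else in the utterance is simply ignored, which is what makes the approach robust against recognition errors.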
4 Interpretation

Table 1: Mapping from dialogue acts (e.g., SUGGEST, ACCEPT, REJECT, INFORM, REQUEST) to negotiation acts (PROPOSE, FEEDBACK, ELABORATE, REQUEST) and the respective processing.

Internally, we model the negotiation in terms of negotiation acts, which tell us what objects are part of a suggestion and signal the speakers' attitudes (accept/reject). Suggestions are constantly completed (see the completion arrow in Fig. 3) and related to previous suggestions by means of the more-specific relation. This allows us to finally select the summary items for generation: the most specific accepted suggestions. The whole process is schematically depicted in Figs. 2 and 3; it is explained in the rest of this section, starting with the introduction of topics.

Topics. Topics partition our domain into four areas: scheduling, traveling, accommodation and entertainment.

Figure 2: Schematic depiction of the summarization process: dialogue acts are mapped to negotiation acts (PROPOSE, ELABORATE, FEEDBACK), which drive object completion; attitude annotations and inter-object relations determine the selection of the summary items.

To find the topic of an utterance we use keyword spotting plus some heuristics. Within one topic the speakers are assumed to negotiate a limited set of objects (e.g. objects of the classes JOURNEY, MOVE and LOCATION for the traveling topic). We keep a set of templates for each topic into which incoming suggestions are integrated to obtain an object we call a negotiation object (NeO). In Fig. 3 the original extracted object (ExO) of utterance U1 is integrated in a JOURNEY template. For each topic we keep topic-specific information in a topic frame. Thus, all suggestions (NeOs) made for one topic are pushed on a topic-specific focus stack.

Negotiation acts. Whereas the topic serves to insert the ExO into a template to create a NeO, the negotiation act determines how to handle the resulting NeO pragmatically. In every negotiation there are essentially four actions that a speaker can perform: (1) PROPOSE an object of negotiation, (2) give FEEDBACK on a former proposal, (3) ELABORATE a former proposal by adding matter-of-fact information, or (4) REQUEST task-related information. This information is contained in the dialogue act. Thus, we use a direct mapping to retrieve the negotiation act, which in turn controls further processing of the NeO (see Table 1).
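The direct mapping and the topic-specific focus stacks can be sketched as follows. The concrete table entries are assumptions in the spirit of Table 1, not the paper's exact mapping:

```python
# Hypothetical dialogue-act -> negotiation-act table (cf. Table 1).
DIALOGUE_TO_NEGOTIATION_ACT = {
    "SUGGEST": "PROPOSE",
    "ACCEPT":  "FEEDBACK",
    "REJECT":  "FEEDBACK",
    "INFORM":  "ELABORATE",
    "REQUEST": "REQUEST",
}

focus_stacks = {}   # one stack of NeOs per topic

def process(topic, dialogue_act, neo=None):
    """Retrieve the negotiation act and update the topic's focus stack."""
    act = DIALOGUE_TO_NEGOTIATION_ACT[dialogue_act]
    stack = focus_stacks.setdefault(topic, [])
    if act == "PROPOSE":
        stack.append(neo)                       # new suggestion in focus
    elif act == "FEEDBACK" and stack:
        mark = "accept" if dialogue_act == "ACCEPT" else "reject"
        stack[-1].setdefault("attitudes", []).append(mark)
    return act
```

FEEDBACK is applied to the top NeO on the focus stack, which is exactly how acceptance and rejection marks are attributed in the interpretation module.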
¥4œ€ŸÈ4Ÿ ´u¸¹œ¡    ¬®¢„¯é´šµ£ ¥4œÒðœ c Ûæ tœ€œ¼ß\°e² ֔ª­œìê™ÝÇðœ€¯š´š£ ¬­°d£ ¬«´2¢_°š¸¹£  Î¸€°š¢8Ö&œ tœ€œ¡¢_°š Ã t£ °d£tœ £tŸ°š¢”  ¬«£ ¬«´2¢4 '¬­¢°š¢z¬­¢D£tœ€Ÿ¢4°šª:Ԕ¢4¬«£tœ( t£ °d£tœYÏ4¬­°šª«´š¯2¨4œ ¦´uτœ¡ª;Ç c ¢4ª«ºU£ ¥„œ¡  œìµ·´2¨„Ÿ¸€°š tœ¡ ©Ö4Ÿ¬­¢4¯é°dÖ&´2¨„£©°  t£ °d£tœ©¸¥4°š¢4¯šœÈ¬­¢¼´2¨„Ÿ ϔ¬­°šª«´š¯2¨„œY¦´uτœ¡ª;Ç*Ù Ú&Û ’Ü– Í ÓÓ$•-Ò Î ›_œœ¹ÌSœ¡¦o”ª­¬«µ·ºY£ ¥„œ4Ÿ´S¸¹œ¡   ¬­¢„¯Ã´šµ £ ¥„œ"¢„œ€¯š´š£ ¬­°d£ ¬«´2¢©°š¸¹£  rԔ¾ Â Ô Â m¡½Ã°š¢4ÏÝ]u½4½'Öf¿syÜv×=Ç â'´2¢”  ¬­Ï„œ€Ÿ £ ¥4œÏ4¬­°šª­´š¯2¨„œœ¹Ì„¸¹œ€Ÿ 4£k¬­¢Ûda¬«¯„ÇÍÌÈÍ ¥„œ€Ÿœ ºš´2¨_ tœ€œ©£ ¥„œ¨„£t£tœ€Ÿ°š¢”¸¹œšÆ áÉÌ c  °š¢”Ïðœ c  €ÇÎߥ4œ 4Ÿ´šL´2 °šªf¬­¢ÞÇLÉßX?òJàuô¹ï ôá ÑÛÙdõLôÛÓ2ò Ñð|×Ê ÙÐuïñò2òMãâšôÏZ ´šÖSÑS¬­´2¨4  ª«ºŸ œ¡ª­°d£tœ¡ '£t´£ ¥„œÃτœ€”°dŸ £ ¨4Ÿ œ £ ¬­¦oœ ´šµ £ ¥„œ £tŸ°š¬®¢B  ¨4¯š¯šœ¡ t£tœ¡Ï%¬­¢åäæçX.Ý«ô¡òá ѶòÓ=˚ôøòJàuôÐòæïÓdð·õ òÙcè&ï,ÓdõLËVÊÐuï¹òFZÇ c ¨„Ÿ$ÔÙÚãè7Ý­ô€òæð3Ùdõ¤4Ÿ ´u¸¹œ¡   Ã£ °dޚœ¡  ¸€°dŸ œ±£ ¥4°d£Î£ ¥„œÛðœ c ´šµ“ÇLÉ7¬­ Ãœ¹ÌS:°š¢4τœ¡ÏU£t´zŸ œ€„² Ÿ œ¡  œ¡¢J££ ¥„œÈÍ ¥4´2ª«œÈ¬­¦:ª­¬­¸€¬«£  ¨„¯š¯šœ¡  £ ¬«´2¢OÛæ  œ€œ tœ¡¸ñ² £ ¬«´2¢éh¤ÖLœ¡ª­´™Í(ÝÇ%Ëk£7£ ¥4¬® ¼L´2¬®¢J£¼Í'œ°šª­ t´ ¸¹´2¦² ”¨4£tœ±£ ¥„œl¿s¹j»A½ êë'½LÄÏÆ_ìÆÏÄŸœ¡ª­°d£ ¬«´2¢´šµ £ ¥„œì¢„œ€Í   ¨4¯š¯šœ¡ t£ ¬«´2¢ £t´¤°šª­ª ´š£ ¥4œ€Ÿ¼ ¨„¯š¯šœ¡ t£ ¬«´2¢4 ¼¦o°šÏ4œÜ£t´ £ ¥4¬® Y&´2¬­¢D£È°š¢4ÏZ°šÏ”Ï£ ¥„œÒð œ c £t´Ü£ ¥4œ±£t´š”¬­¸±µû´d² ¸€¨4 ( t£ °š¸Þ:ÇÎË ðœ cîí É ¬­ (¦´šŸ œ© t&œ¡¸€¬«Ô”¸È£ ¥4°š¢Ü° 4Ÿœ€ÑS¬«´2¨” k´2¢„œ í Ù ¬­µ ï Ÿ ´D´š£(´šµ í É ¬­ ´šµ £ ¥„œ©  °š¦œ¸€ª­°š   (´šŸ(°  ¨„Ö4² ¸€ª­°š   ´šµ í Ù ð E@ 4!z:! @*49&!K9,4h,åÊ ¼4.4'Q&!&11 h!+ 9,3 4ñ… ¾8¾Š‰ÏÁ ˆ_†N‹ Å&ÛG¼+4.4'Q&!&8;2! E@+ 3 ¼Â @3…! åÊÖFI9&9¼!.4+9,L4"+E@4! åÊ FX[X9&! @L1,aD 49- ... ò%ó ô õ ö ÷Jó ø ö ÷Jó ø ù úqûMü ý þMÿMü ù                     ù! "ù ý ÿ ù# ùü ý%$ þ &  '  ( ò%ó ô õ )+*-,. / ÷0)+.-132 4 ö ÷ /-5 4 6Jö ô 2 7 *-68ö / .Jõ98;:0<= )+*-,. > =@? A B C-DA B : C A E D0C ?GF: E A DA B : C / ÷0)+.012 4 ö ÷ /05 4 6Jö ô 2 > =@? A B C-DA B : C A =@8+F H DIA = ô ó )+. ò%ó ô õ ö ÷Jó ø ô ó )+. 
J K0L-M0N ÷8õ K-L0L0L-O0P- Q-R+S0J / ÷0)+.-132 4 ö ÷ /-5 4 6Jö ô 2 K0 ýù"%$ TUúü ù V $ W &    ( J P0 Q0R+S-J ù# ü ! "júþ XZY [\ ] ^%_ ` acb ] 7 *-68ö / .8õ 8;:0<= 8;:0<= )+*0,. )+*0,. 7 *-68ö / .8õ > =@F DE A d0E =e A B 8;= > =@? A B C0DA B : C A E D0C ?GF: E A DA B : C > =@F DE A d0E =e A B 8;= A =@8+F H DA = completion sponsoring A E D0C ?F: E A DIA B : C da¬«¯2¨4Ÿ œ ÌÍ»gf ;/ E@+¿h89-¼!J Å…!Kha!3.9-!,ã 8[X9&!6FI5ih+T%.L+K-;å@,ãE@ 4!z:! @U @a[2&9&!OFjB:T%.L-b ï µ·´šŸÉœ€Ñšœ€Ÿ ºŸ œ¡ª­°d£ ¬­´2¢lkoÛ í Ùnm ípo Ý\£ ¥4œ€Ÿ œ¬­ É°ÃŸ œ¹² ª®°d£ ¬«´2¢qkoÛ í É m í o o Ý(°š¢”Ï í o o ¬­ Î¦´šŸ œo t&œ¡¸€¬ ² Ԕœ¡¸U£ ¥4°š¢Ð´šŸ7œaöS¨4°šªÃ£t´ í o Í¥„œ€Ÿ œ í o m í o o °dŸœ*ðœ c f  €Æ(£tœ¡¦&´šŸ°šª(´šÖuítœ¡¸¹£   r8´šŸ¼4Ÿ¬­¦o¬«² £ ¬­ÑšœÈÏ4°d£ °£?ºD&œ¡  ]u½4½Ö\¿'y v× ¨„£t£tœ€Ÿ°š¢4¸¹œ¡  ª­¬­Þšœ st°šª«Ÿ¬­¯2¥J£ tuÆ s¯š´S´SÏituÆust¢„´vtuÆws£ ¥4°d£τ´Sœ¡  ¢Jfç£ Í"´šŸ Þxtoœ€£ ¸dÇL¦o°dޚœ £ ¥„œRÏ4¬­°šª«´š¯2¨„œ 4Ÿ ´u¸¹œ¡   t´šŸ °šÏ4Ï ° Ÿ œ¡ t&œ¡¸¹£ ¬«Ñšœ °š¸€¸¹œ€4£ °š¢4¸¹œŠŽÒŸœ 휡¸¹£ ¬«´2¢+¦o°dŸÞ £t´ž£ ¥„œ £t´š ðœ c ´2¢z£ ¥„œÈµû´u¸€¨4  t£ °š¸Þ:Ç Ë   ¸¥„œ¡¦o°d£ ¬­¸ Ï4œ€”¬­¸¹£ ¬«´2¢Ã´šµ„°&´2    ¬­Ö”ª«œaÏ4¬­°šª«´š¯2¨„œ ¬­   ¥„´™Í ¢È¬­¢da¬«¯„ÇÒüSÇaߥ„œ€Ÿ œÉºš´2¨©¸€°š¢È  œ€œ=¥„´™ÍO°d£t£ ¬ ² £ ¨4τœk°š¢4¢„´š£ °d£ ¬­´2¢°š¢4Ϭ­¢J£tœ€Ÿ ²Q´šÖu휡¸¹£ÉŸœ¡ª­°d£ ¬«´2¢4 a°dŸ œ ¨4 tœ¡Ï£t´Ã tœ¡ª«œ¡¸¹£a  ¨4¦o¦o°dŸºY¬«£tœ¡¦o €Ç\›_œ'£ °dޚœ°šª­ªD´šÖ„² ítœ¡¸¹£  ¦o°dŸޚœ¡Ï7Í ¬«£ ¥z°d£(ª«œ¡°š t£´2¢4œÈ°š¸€¸¹œ€4£Ã°š¢4Ï7¢„´ Ÿ œ ítœ¡¸¹£=°d£t£ ¬«£ ¨4Ï4œšÆJ°š¢4Ϭ«¯2¢„´šŸœk°šª®ªu´šÖu휡¸¹£   £ ¥4°d£=°dŸ œ Ÿ œ¡ª­°d£tœ¡ÏU£t´7° ¿¹»L½ êë'½ÄaÆ_ìÆÏı¬«£tœ¡¦zÇߥ„œ  ¨4¦² ¦o°dŸ ºY¬­£tœ¡¦o f°dŸ œ'”°š    œ¡ÏY£t´Ã  ¨4¦o¦o°dŸº(¯šœ¡¢4œ€Ÿ°d£ ¬«´2¢ Ûæ tœ€œ©åuœ¡¸dÇsf2ÝÇ y ’cz ”|{ Í ÐŠ•J’Ò ß¥4œÊ¸¹´2¦”ª­œ€£ ¬«´2¢ °šª«¯š´šŸ¬«£ ¥4¦Öf   ´šÖuítœ¡¸¹£ ¬«Ñšœ/¬® z£t´¶°šÏ4Ï ¬­¢„µ·´šŸ¦o°d£ ¬«´2¢ø£t´ £ ¥4œZ¸€¨„Ÿt² Ÿ œ¡¢D£Cðœ c í~}€‚ µ·Ÿ ´2¦´2¢„œ4Ÿ œ€Ñu¬«´2¨4 Cð œ c ÆÓ£ ¥„œ ƒ ! &/z:! @ ;N< C1? :U?D5H„H5D  4é!'Q¼Â 4./Ê 8[X9&!?  I… + å/  !J! ¶]/;/ U† Å&/z:! 
@ˆ‡X ЉKF|3QF¶B/ / Ï<=^‹'ŒF‡@L3L  t´d²W¸€°šª®ª«œ¡Ï Ñ.èéÙdõ?Ñ ÙdïÇ ß¥4œ§°šª«¯š´šŸ¬«£ ¥4¦ ¸¹´2¢4  ¬­ t£   ´šµìÛê™ÝoԔ¢4ϔ¬­¢„¯Z°/  ¨4¬«£ °d֔ª­œé t&´2¢4 t´šŸ¬®¢„¯áðœ c ¬®¢ £ ¥„œÐ¸€¨„ŸŸ œ¡¢J£O£t´š”¬­¸f   µ·´u¸€¨4  ª­¬­ t£O°š¢4ÏTÛ;ü2Ý/£ °dÞJ² ¬­¢4¯Õ´Òњœ€Ÿ”°dŸ £  8´šµ±£ ¥4œ   L´2¢” t´šŸ Ûæ  œ€œ da¬«¯„ÇÌ2ÝÇ g'´š£ ¥O t£tœ€” ©°dŸ œì¦´uτœ¡ª«œ¡ÏÖDº_°é ¬­¢„¯2ª«œoµæ¨4¢4¸¹£ ¬­´2¢ Ä¹Š¿L뎍j½jÅA½LÛ í}n‚ Æ ío ÝÍ¥4¬­¸¥ £tŸ¬«œ¡ o£t´O¸¹´2¦”ª«œ€£tœ í}n‚ ¨4  ¬­¢4¯ ðœ c í o °š °© t&´2¢4 t´šŸ¡ÆSŸ œ€£ ¨„Ÿ¢4¬­¢„¯È° Ö&´S´2ª«œ¡°š¢7Ñd°šª­¨„œµû´šŸÃ  ¨4¸€¸¹œ¡   (´šŸ(µ·°š¬®ª­¨„Ÿ œÈ°š¢”Ï骫œ¡°™ÑJ² ¬­¢4¯ í}n‚ ¨4¢4¸¥4°š¢„¯šœ¡ÏЬ­¢ ¸€°š tœ_´šµµ·°š¬­ª®¨„Ÿ œšÇ g'º °d4:ª«ºS¬®¢„¯£ ¥”¬­ kµ·¨”¢4¸¹£ ¬«´2¢¼´2¢zœ€Ñšœ€Ÿ º í o ´2¢7£ ¥„œÎµû´d² ¸€¨4   t£ °š¸ Þ7¨4¢D£ ¬­ª&¬«£  ¨”¸€¸¹œ€œ¡Ï4 I‘ÎÍ'œÎԔ¢4Ïz°o t&´2¢4  ´šŸ °š¢4Ï鸹´2¦”ª«œ€£tœ í}€’ Ç ß ¥„œµæ¨4¢4¸¹£ ¬«´2¢ ĹN¿A뎍j½jÅL½¤Í'´šŸ ÞS ÜŸ œ¡¸€¨„Ÿ  ¬«Ñšœ¡ª«º £ ¥„Ÿ´2¨„¯2¥o£ ¥„œ í~}€‚ ´šÖuítœ¡¸¹£ÎÛæ°š¢4ÏoŸ œ¡  Lœ¡¸¹£ ¬­Ñšœ  ¨„Ö4² ´šÖuítœ¡¸¹£  Ü´šµ ípo ÝÇÊÚ £7Ô4Ÿ t£é¸¥4œ¡¸ Þu Ü¸¹œ€Ÿ £ °š¬®¢§4Ÿ œ¹² ¸¹´2¢4ϔ¬«£ ¬«´2¢4  »Ê¢4°š¦œ¡Ï㜡¢J£ ¬­£ ¬«œ¡ ÐÛæ¸€¬«£ ¬«œ¡ €Æz&œ€Ÿ t´2¢”  œ€£ ¸dÇ Ý¶¸€°š¢´2¢4ª­ºÜÖ&œo t&´2¢4 t´šŸœ¡ÏUÖDº8´šÖuítœ¡¸¹£  ÈÍ ¬­£ ¥ œaöS¨4¬«Ñd°šª«œ¡¢D£a¢”°š¦œšÆJ¦´Òњœ´šÖu휡¸¹£  =¦¨4  £a¥”°¡Ñšœ ¸¹œ€Ÿt² £ °š¬­¢H£tœ¡¦&´šŸ°šª¼4Ÿ ´š&œ€Ÿ £ ¬«œ¡ ÐÛæ¦´ÒњœžÖ”°š¸Þ ÓMÊñòWô€ï ¦´Òњœ¼£ ¥„œ€Ÿ œÒÝΰš¢”Ï/ t´Ü´2¢ÇzÚ µ£ ¥„œ±4Ÿœ¡¸¹´2¢4Ï4¬«£ ¬«´2¢”  ¥„´2ª®ÏZ°šª­ª"  ¨„Ö4£tŸ œ€œ¡ ´šµ í o £ ¥4°d£τ´Ü¢„´š£´S¸€¸€¨4Ÿ¬®¢ í}n‚ °dŸ œ_°šÏ4τœ¡ÏÕ£t´ í~}€’ Ûæ tœ€œ da¬«¯„ÇQÌ2Ýǔ“¢u² •‚–   + …!¿+3&P+/M! 1  !3 a+9&Q +¼+¼Â-N @  46! 
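A flat, one-level version of the completion step might look like this. The precondition shown is only the named-entity check, heavily simplified, and the attribute names are invented:

```python
def preconditions_hold(n_new, n_i):
    """Simplified: named entities only sponsor equally named objects."""
    a, b = n_new.get("has_name"), n_i.get("has_name")
    return a is None or b is None or a == b

def complete(n_new, n_i):
    """Try to complete n_new using n_i as sponsor; on failure n_new is unchanged."""
    if not preconditions_hold(n_new, n_i):
        return False
    for relation, subtree in n_i.items():
        n_new.setdefault(relation, subtree)   # take over only missing subtrees
    return True

def find_sponsor(n_new, focus_stack):
    """Walk the topic's focus stack (most recent first) until one sponsor works."""
    return any(complete(n_new, n_i) for n_i in reversed(focus_stack))
```

Because each sponsor on the stack is itself already completed, a single successful sponsor suffices, which is the property the paper exploits to avoid re-processing earlier objects.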
a+'OÂ&S 4J @a[29-!O Eã!2!,  @b EbL‡ F-ÿ— |.ÿ_|˜ñÿ-þ-úü ý+L-b τœ€Ÿf¸¹œ€Ÿ£ °š¬­¢Y¸¹´2¢4Ï4¬­£ ¬«´2¢4 &Ÿ œ¡ª­°d£ ¬­´2¢4  ¸€°š¢ÎÖ&œ  t&œ¡¸€¬­°šª ² ¬«³€œ¡ÏZÛ·œšÇ範Çi™'Ãê ÅsÆ1¿½£t´l™'Ãê šA½=ësûaÅjºA»L½ ÅsÆ1¿½SÝÇ ð´š£tœ£ ¥4°d£"  ¬­¢4¸¹œ í o ¬­  °šª«Ÿœ¡°šÏ„º°È¸¹´2¦”ª­œ€£tœ¡Ïo´šÖ„² ítœ¡¸¹£€Æ4Í"œÈ´šÖ4£ °š¬­¢z°o¸¹´2¦:ª«œ€£tœÎ´šÖuítœ¡¸¹£ í~}€‚ Í ¬«£ ¥u² ´2¨„£µ·¨„Ÿ£ ¥„œ€Ÿ 4Ÿ´S¸¹œ¡   ¬­¢„¯´šµÉ´š£ ¥„œ€Ÿ(4Ÿ œ¡¸¹œ¡Ï4¬®¢„¯o´šÖ„² ítœ¡¸¹£  €Ç ߬®¦œÃœ¹ÌS4Ÿœ¡    ¬«´2¢4 °dŸœY¸¹´2¦”ª«œ€£tœ¡ÏzÖSº¼°o tœ€”°e² Ÿ°d£tœÈ ¨„Ö”¦´uÏ4¨4ª«œÛ2ÞY¬«”œ€£°šª;Ç«Æfê¡ëšëšë2ÝÇ › œ ٞŸ K&0¡ ”¢_3”> 3”02KL5š9W.f> àœ¡  L´2¢”  ¬«Ö”ª«œ±µû´šŸ£ ¥„œz°š¸¹£ ¨4°šª'¯šœ¡¢„œ€Ÿ°d£ ¬­´2¢ ´šµ £ ¥„œ   ¨4¦±¦o°dŸ¬«œ¡ Õ¬­  £ ¥„œ ª®°š t£Õ”Ÿ ´S¸¹œ¡   ¬­¢„¯B֔ª«´u¸ ÞH¬­¢ da¬«¯„Ç ê Ëé£ ¥„œo  ¨”¦o¦o°dŸ º7¯šœ¡¢4œ€Ÿ°d£t´šŸÛæË(ª«œ¹Ì„°š¢4τœ€Ÿt²    t´2¢ œ€£z°šª;Ç«ÆÃüdýšýšýJÝÇ c ¢ø¨4 tœ€Ÿ¼Ÿ œaöS¨„œ¡ t£¼¬«£¼¸¹´2¢u² њœ€Ÿ £  8£ ¥„œO¦´2 t£Ü  Lœ¡¸€¬­Ô”¸°š¸€¸¹œ€4£tœ¡Ï ðœ c  Ü¬®¢J£t´  tœaöS¨„œ¡¢4¸¹œ¡ È´šµk¥”¬«¯2¥ª«œ€Ñšœ¡ªkÿ(œ€Ÿ¦o°š¢/ tœ¡¢J£tœ¡¢”¸¹œìτœ¹²   ¸¹Ÿ¬­4£ ¬«´2¢4 €ÇUߥ„œ¡  œ°dŸ œz¸¹´2¢Dњœ€Ÿ £tœ¡Ï ¬­¢D£t´U tœ¡¦o°š¢u² £ ¬­¸(τœ¡  ¸¹Ÿ¬­4£ ¬«´2¢4 ÃÛ-£Úß Ý=°š¢4ÏԔ¢4°šª®ª«ºŸ œ¡°šª­¬«³€œ¡Ï¼°š  ÍŸ¬­£t£tœ¡¢Î£tœ¹ÌS£fÖDº(£ ¥„œ œ¹Ì„¬­ t£ ¬­¢4¯ÿ܀Ÿ¦o°š¢Y¯šœ¡¢4œ€Ÿ°d£t´šŸ µ·´šŸY4Ÿ œ¡ tœ¡¢D£ °d£ ¬«´2¢ÇÒd„´šŸÈ£ ¥4œo¯šœ¡¢„œ€Ÿ°d£ ¬«´2¢Z´šµ?ÆÓœšÇç¯„Ç«Æ á=¢4¯2ª­¬­  ¥Î  ¨”¦o¦o°dŸ¬«œ¡ ¡Æ¡£ ¥„œg£(Ú ß  f°dŸ œ=  œ¡¢J£\£ ¥„Ÿ ´2¨„¯2¥ £ ¥„œÃ£tŸ°š¢4  µûœ€Ÿ¸¹´2¦&´2¢„œ¡¢D£Ö&œ€µû´šŸœŸ œ¡°šª®¬«³¡¬­¢„¯©£ ¥„œ¡¦ ¬­¢¼£ ¥„œá=¢„¯2ª­¬®  ¥¯šœ¡¢„œ€Ÿ°d£t´šŸ¡Ç ›_œ¼¸¥4°dŸ°š¸¹£tœ€Ÿ¬«³€œ£ ¥„œì  ¨”¦o¦o°dŸ ºU”ª­°š¢”¢4¬­¢„¯z°š    ¬­¦o”ª­¬«Ô4œ¡Ï £tœ¹Ìu£z°š¢4Ïø tœ¡¢D£tœ¡¢4¸¹œU”ª­°š¢4¢4¬®¢„¯„Ǟߥ„œ   ¨4¦±¦o°dŸ º¶¯šœ¡¢4œ€Ÿ°d£t´šŸÜ¨4  œ¡ z°š¢§¬­¢4 t£ °š¢”¸¹œ8´šµ©£ ¥„œ ”ª­°š¢ 4Ÿ´S¸¹œ¡    ´šŸτœ¡  ¸¹Ÿ¬«Ö&œ¡Ï ¬­¢%ÛæËª­œ¹Ìu°š¢4Ï4œ€Ÿ   t´2¢ °š¢4ÏÕàœ¡¬«£ ¥4¬®¢„¯šœ€Ÿ¡ÆÃê¡ëšëØàšÝZËOµû´šŸ¼¸¹´2¦”°dŸ°dÖ:ª«œé°d„² 4Ÿ ´2°š¸¥„œ¡   tœ€œYÛ;èé´D´šŸœšÆ:ê¡ë_^šë2Ý]ËYÍ ¥4¬®¸¥¬­¢D£tœ€Ÿ 4Ÿ œ€£   ”ª­°š¢Õ´š&œ€Ÿ°d£t´šŸ µ·´šŸ£tŸ°¡Ñšœ€Ÿ  ¬®¢„¯/£ ¥4œ*ðœ c  ¼°š¢4Ï ”°dŸ £ ¬­£ ¬«´2¢sŽe¸¹´2¢Dњœ€Ÿ ££ 
¥4œ¡¬«Ÿ¸¹´2¢D£tœ¡¢J£¬­¢D£t´Ü°dÖ: t£tŸ°š¸¹£  tœ¡¢D£tœ¡¢4¸¹œÏ4œ¡  ¸¹Ÿ¬«4£ ¬­´2¢4 €Ç ߥ4œ¬­¢„µ·´šŸ¦o°d£ ¬«´2¢Ð¬­¢Ð»½4¾¿ Áà¿:ÄûÅMf  z tœ¡¦o°š¢u² £ ¬­¸(Ï4°d£ °d֔°š tœoÛæ tœ¡¦±Ï„ÖLÝ=¥4°š 'ÖLœ€œ¡¢ìœ¹Ìu£tœ¡¢4τœ¡ÏÍ ¬«£ ¥ ¬­¢„µ·´šŸ¦o°d£ ¬­´2¢¤°dÖL´2¨4£±°dŸ ¯2¨4¦œ¡¢D£  ±°š¢”϶°dŸ ¯2¨4¦oœ¡¢J£ £WºS&œ¡ a´šµ:£ ¥4œ tœ¡¦o°š¢D£ ¬­¸kœ¡¢J£ ¬­£ ¬«œ¡ aµû´šŸÉ£ ¥4œ”ª­°š¢4¢4¬®¢„¯ 4Ÿ ´u¸¹œ¡   €Çìd„´šŸì£ ¥4œ7њœ€Ÿ ֔ €Æ'´š4£ ¬«´2¢”°šª­¬«£WºO´šµÎ°dŸ ¯2¨u² ¦œ¡¢D£  °š¢”Ï/°šÏeí ¨4¢4¸¹£  ¥4°š ÖLœ€œ¡¢O°šÏ4τœ¡Ï Lj£Éœ€Ÿ ֔ €Æ ð(ä= é°š¢”Ïžä=ä" 7°dŸ œ/֔°š  ¬­¸U֔¨”¬­ª­Ï4¬­¢4¯/֔ª­´S¸ÞS zµ·´šŸ £ ¥„œU tœ¡¢D£tœ¡¢4¸¹œ¡ €Ç ߥ„œÜ”ª­°š¢Õ4Ÿ ´u¸¹œ¡   t´šŸz¸¹´2¢Dњœ€Ÿ £   £ ¥„œÛðœ c  €ÆfÏ4œ€Lœ¡¢”Ï4¬­¢„¯´2¢U£ ¥„œ±¢S¨4¦ÖLœ€ŸÃ´šµ'Ÿ œ¡ª­°e² £ ¬«´2¢4 =°š¢4Ï£ ¥„œ Ï4œ€4£ ¥´šµL£ ¥„œ ¸¹´2¢D£tœ¡¢J£'´šµ:£ ¥4œŸ œ¡ª­°e² £ ¬«´2¢4 ¡Æ„£t´o´2¢„œY´šµa£ ¥„œÎ֔°š ¬­¸Ã֔¨4¬­ª®Ï4¬­¢„¯֔ª«´u¸ Þu QðäÓÆ ä=äø°š¢4Ï Ûæ tœaöS¨„œ¡¢4¸¹œ¡ (´šµÝ  tœ¡¢J£tœ¡¢”¸¹œ¡ €ÇUd„´šŸÎ  ¬­¦”ª­œ ðœ c  éÛ·œšÇç¯„Ç £tŸ°š¢4 t&´šŸ £ °d£ ¬­´2¢¤Ï„œ€ÑS¬®¸¹œ¡ €Æ=£ ¬­¦oœœ¹ÌS² 4Ÿ œ¡   ¬«´2¢4 Ý°ïð(䓎eä"äÓÆ=°š¢4Ϥµ·´šŸ±¸¹´2¦”ª­œ¹Ì ðœ c   Û·œšÇç¯„Ç ¿¹ÀL½”Æ“ÃNëÏë'¹LƊ¼AÅ_¿½=¼LńÝY tœaöS¨„œ¡¢4¸¹œ¡ ´šµÃ tœ¡¢u² £tœ¡¢4¸¹œ¡ °dŸ œY¯šœ¡¢„œ€Ÿ°d£tœ¡Ï Ç ß\´τœ¡¦´2¢4 t£tŸ°d£tœ(£ ¥4œ(¯šœ¡¢4œ€Ÿ°d£ ¬«´2¢z¬­¢¦´šŸ œÎτœ¹² £ °š¬­ªQÆ\¸¹´2¢4  ¬®Ï„œ€Ÿda¬«¯„Ç h7Í ¥”¬­¸¥_¬­ È°hð œ c £ ¥4°d£YŸ œ¹²   ¨”ª«£  (µûŸ ´2¦ °z¸¹´2¢D£ ¬­¢S¨4°d£ ¬«´2¢8´šµ=£ ¥„œÏ4¬®°šª«´š¯2¨„œœ¹ÌD² ¸¹œ€Ÿ ”£ì  ¥„´ÒÍ ¢¶¬®¢ da¬«¯„Ç ÌSÇ¥¤(œ€&œ¡¢4Ï4¬­¢4¯_´2¢ £t´š”¬­¸ Û·£tŸ°™Ñšœ¡ª­¬­¢„¯DÝÆu¸€ª­°š   (Û í´2¨4Ÿ¢„œ€º4Ýɰš¢4Ï£ ¥4œ¸¹´2¢D£tœ¡¢J£'´šµ £ ¥„œ"£t´šÈ´šÖuítœ¡¸¹£ÓÍ"œ' tœ¡ª«œ¡¸¹£\° tœ€£f´šµu&´2    ¬­Ö”ª«œaњœ€Ÿ ֔ €Ç d„´šŸÉœ¡°š¸¥њœ€Ÿ Ö©Í"œkŸ œ¡¸€¨„Ÿ  ¬«Ñšœ¡ª«ºÃ¯šœ¡¢„œ€Ÿ°d£tœ£ ¥4œ'¸¹´2¢u² £tœ¡¢D£È´šµ£ ¥„œ¡¬­ŸÈ°d44Ÿ ´š”Ÿ¬­°d£tœŸ œ¡ª­°d£ ¬­´2¢4 Îºu¬«œ¡ª­Ï4¬­¢4¯¼°  tœ€£¼´šµCðä" €Æ ä=ä" ¼°š¢4ÏÆ œ€Ñšœ¡¢D£ ¨4°šª­ª­ºšÆ tœ¡¢D£tœ¡¢4¸¹œ¡ €Ç Ë(¸€¸¹´šŸÏ4¬­¢4¯±£t´£ ¥„œ¸¹´2¢4  £tŸ°š¬­¢D£   ´šµÉ£ ¥„œњœ€Ÿ Ö¤Û·Ñe°e² ª«œ¡¢”¸¹œ7Ÿ ´2ª«œ¡ €Æ t´šŸ£ °šª ¸¹´2¢4 t£tŸ°š¬­¢J£  ìµ·´šŸì°dŸ ¯2¨4¦oœ¡¢J£   °š¢4Ï찚Ïeí ¨4¢4¸¹£™Ûæ ÝtÝÓÍ'œ£tŸ º£t´©ª­¬­¢„Þ£ 
¥„œeð(䓎eä=ä"  £t´ £ ¥„œ©Ñšœ€Ÿ Ö Ç1d„´šŸ$êô.íšð·õ4õLô¹õ¼£ ¥„œ¸¹´2¦”¨”ª­ t´šŸ ºz°dŸ ¯2¨u² ¦œ¡¢D£eË7 ¨„Öuítœ¡¸¹£QËé¥4°š £t´¼¸€°dŸŸ º7£ ¥„œ t´šŸ £ Ñðæò2ÐMÓdó òæð3Ùdõ4ÇìÚ ¢_£ ¥”¬­ Î¸€°š tœšÆÓÍ"œ±¨” tœ£ ¥„œ±¦´Òњœ±Í ¥”¬­¸¥_¬­  Ÿ œ¡ª®°d£tœ¡Ï£t´é°š  ¿s¹jÀA½ Åv™'½»L½”DZߥ4¬­ ÃŸœ¡ª­°d£ ¬«´2¢¸¹´šŸt² Ÿ œ¡  L´2¢”Ï4 £t´§¦Îð·õ4ï ô¹ð›Ñ€ôÛæá=¢„¯?»k£tŸ¬«é£ ¥„œ€Ÿ œÒÝkÍ¥4¬­¸¥ ¬­ Y´šµk  ´šŸ £ ¿¹ÀL½ êaÆNÅLÇêô.íšð·õ4õLô€õܰšª­ t´7°šª®ª«´™Í  Yµû´šŸ ´2¢„œì°šÏeít¨”¢4¸¹£  Î´šµ t´šŸ £oòæðXÚ±ô èéÙdð·õ”ò=°š¢”Ï_£ ¥4°d£Y£ ¥„œ  t´2¨4Ÿ¸¹œ8°š¢”Ï £ °dŸ ¯šœ€£zª«´u¸€°d£ ¬«´2¢ ¸€°š¢¶Ö&œÜª®¬­¢„Þšœ¡Ï¤£t´ £ ¥„œY ¨„Öuítœ¡¸¹£€Çg¤(¨4Ÿ¬­¢„¯£ ¥4¬­ k”Ÿ ´S¸¹œ¡  kÍ"œY¦o°š¬®¢J£ °š¬­¢ °¸¹´2¢J£tœ¹Ìu£€Æ2¸¹´2¢4  ¬­  £ ¬­¢„¯´šµ ÆÒœšÇ範ǫÆdµ·´S¸€¨4 \°š¢4ÏY¥4¬® t£t´šŸ º ª­¬® t£€Æ Ûæ¸¹µ?Ç\Û3¤Ã°šª«œšÆ\ê¡ëšë_f2ÝtÝ  ¨„”L´šŸ£ ¬­¢„¯©£ ¥„œÎ¯šœ¡¢„œ€Ÿ°e² £ ¬«´2¢7´šµ?Æ4œšÇ範ǫÆ:4Ÿ ´2¢„´2¨”¢4 k°š¢4Ï7τœ¡¦´2¢” t£tŸ°d£ ¬«Ñšœ¡ €Ç ¨ 7?M;N?L@ª©FU5U<1H/I1G/;Š? I1G·:5D/7? TFB$EF?¬«ŠH/GF7 G5C$HUw9I1T 91D5D <M;5;=< T19 G$HF<I ­ D/71?FTFB$E1HI1R@ ­ U?19I®$?FC°¯ 9/I1Tw:/U?59 ®$?FCˆ©¬«$HFE5E ;N?5? GÞH/IcGF71? G5C91HIÞ:G9 GHF<IÞ<I G 7?·Wv± <„ ;=9 CD/7p²'³³³c9 G 9µ´ B$9 C5G1? C G<ÞWI³cH/I GF7? ;=< C IŠH/I1Ri± ¨ C9>1?1E1H/I5R@ ¨ 7? G5CH/U GF7? C1?¬„5C<V;¶$9M;$QFB1C5R G< ¶$9I< >1?FCcQ1K G5C91H/I¬«ŠHFE5E :/G9 C5Gw<I GF7?ˆ²;± <%„ ;=9 CD/7Þ9 G·W³ <ÏPDFEF<1DI® H/I G 7? ;=< C IŠH/I1Ri± ¨ 7? «$9 K Q$91DI® Q5KcG5C91HI°«ŠHFE5E :/G9 C5G </I GF7?ˆ²;± <%„ ;=9 CD/7Þ9 G 795E%„cU$91:/Gˆ· H/IcGF7?c?F>5? IŠH/I1R± ©$D5D <M;5;=< T19 G$HF<I@ ¨ 71? 7$< G1?5Eµ¸5BŠH5:? I57$<%„ÞHI ¶$9I< >1?FC°«$95: 9 R5C1?F?FTw<I¹± ­ U?19 ®$? C°©wH5:ÕG19 ®NH/I1R DF9 C5? <%„ GF71? 7$< G1?5E C1?: ? 
Figure 4: Completed NeO (a journey object with relations such as destination, transportation and departure time).

Figure 5: An English example summary.

6 Evaluation

Evaluation is problematic in general, since it is hard to find the ideal summary (Mani and Maybury, 1999). In our case, things are further complicated by the nature of speech-to-speech translation. There are a lot of system errors that can lead to a partial breakdown of the dialogue and subsequent repair dialogues. Using, for instance, the recognized and translated utterances as a basis, it is in many cases almost impossible, even for a human, to judge what has actually been agreed upon. Consider the excerpt from one of our German-English evaluation dialogues in Fig. 6, where for both participating speakers and an observer it is difficult to grasp what is going on in the dialogue.

Therefore, for a first evaluation we assumed perfect recognition as a starting point and evaluated four German-English dialogues which were mediated and translated by the VERBMOBIL system. During the recording of the dialogues the locutors had no visual contact. For each of the transcribed dialogues, a human marked the agreed-on features, maximally 47 (e.g. location, date for a meeting, speaker's name and title, booking agent). Each dialogue only contains a subset of these features. The dialogues were run through the system, and the summary was generated.

Figure 6: Excerpt from one of our evaluation dialogues.
The features in the summary were compared using standard classifications as described in (Mani and Maybury, 1999):

Corr: The feature approximately corresponds to the human annotation. This means that the feature is either (1) a 100% match, (2) not specified enough, or (3) too specific.

Miss: A feature is not included.

False: A feature was erroneously included in the summary, meaning that the feature was not part of the dialogue or it received a wrong value.

TN (True Negative): A feature was not part of the dialogue, and not included in the summary.

Figure 7: Evaluation results (per-dialogue counts of Corr, Miss, False and TN features, with the resulting recall, precision and fallout values).
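Given per-dialogue counts of the four categories, the recall, precision and fallout figures reported in Fig. 7 are computed in the usual way. This small helper is our own illustration, not code from the paper:

```python
def metrics(corr, miss, false, tn):
    """Recall, precision and fallout over summary features."""
    recall    = corr / (corr + miss)  if corr + miss  else 0.0
    precision = corr / (corr + false) if corr + false else 0.0
    fallout   = false / (false + tn)  if false + tn   else 0.0
    return {"recall": recall, "precision": precision, "fallout": fallout}
```

Recall measures how many agreed-on features made it into the summary, precision how many reported features were actually agreed on, and fallout how often non-features were wrongly reported.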
ߥ4œéŸ œ¡  ¨”ª«£  ì°dŸœ8  ¥4´™Í ¢ ¬­¢ida¬«¯„Ç(àDÇöË ¼¸€°š¢ Ö&œo tœ€œ¡¢´2¨4ŸÈ°d44Ÿ ´2°š¸¥_£tŸ¬«œ¡ Y£t´éÖ&œ´2¢£ ¥„œ± °dµûœ   ¬­Ï4œ_`Y£ ¥„œ_  ¨4¦o¦±°dŸ ºÕ¸¹´2¢D£ °š¬­¢4 z´2¢”ª«º £ ¥„´2 tœ_µ·œ¡°e² £ ¨„Ÿ œ¡ 7£ ¥4°d£z£ ¥„œZ tºu t£tœ¡¦ú£ ¥”¬­¢„Þu ìÖ&´š£ ¥Ð:°dŸ £ ¢„œ€Ÿ  °d¯šŸ œ€œ¡ÏU´2¢Çߥ„œ¦o°š¬­¢ÜŸ œ¡°š t´2¢4 Ãµ·´šŸÎ¢„´š£Ã¯šœ€£t£ ¬­¢„¯ ¥4¬«¯2¥4œ€Ÿk¢S¨4¦Ö&œ€Ÿ k¬­ Ï4¨„œÃ£t´£ ¥„œYª­¬®¦o¬«£tœ¡ÏìŸ œ¡¸¹´š¯2¢4¬ ² £ ¬«´2¢_´šµϔ¬­°šª«´š¯2¨„œo°š¸¹£  ¼ÛàeýÏÎTŸœ¡¸€°šª­ªûÝ(°š¢4ÏZœ€Ÿ Ÿ ´šŸ  ¬­¢¼£ ¥„œ¸¹´2¢J£tœ¡¢D£ œ¹Ìu£tŸ°š¸¹£ ¬«´2¢Ç g h .f>=I„M?Ù=cd9Q.\> ›_œτœ¡¦´2¢” t£tŸ°d£tœ¡Ïo¥„´Òͤ´2¢„œ¸€°š¢o°š¸¥”¬«œ€Ñšœ°Î  ¨4¦² ¦o°dŸ¬­³¡°d£ ¬«´2¢ µæ¨4¢4¸¹£ ¬­´2¢4°šª­¬«£?º ´šµÜ»½”¾¿ Áà¿:ÄûŞÖSº ¦´2 t£ ª­º7¨„£ ¬­ª­¬«³¡¬®¢„¯o°š¢4Ï8œ¹ÌS£tœ¡¢”Ï4¬­¢„¯°šª«Ÿ œ¡°šÏ„ºzœ¹Ì„¬­ t£² ¬­¢„¯ ¸¹´2¦&´2¢„œ¡¢D£  €ÇBß ¥4¬­ ìµ·¨4¢”¸¹£ ¬«´2¢4°šª­¬«£?º ¬­ µ·¨”ª­ª«º ¬­¢D£tœ€¯šŸ°d£tœ¡Ï鬮¢¼£ ¥„œYԔ¢4°šª&њœ€Ÿ  ¬«´2¢z´šµÓ£ ¥„œÈ tºu t£tœ¡¦zÇ ›_œ ¨4 tœ  t£ °š¢4Ï4°dŸϞ¦œ€£ ¥„´uÏ4 ÜµûŸ´2¦ £ ¥„œO°dŸ œ¡° ´šµ ¢4°d£ ¨4Ÿ°šªkª­°š¢„¯2¨4°d¯šœ4Ÿ ´u¸¹œ¡    ¬­¢4¯Ü°š¢4Ï ¬­¢„µ·´šŸ¦o°e² £ ¬«´2¢œ¹Ìu£tŸ°š¸¹£ ¬«´2¢oµû´šŸ= ¨4¦o¦o°dŸ¬­³¡°d£ ¬«´2¢J»aåu£ °d£ ¬­ t£ ¬­¸€°šª ¦œ€£ ¥„´uÏ4 "°dŸ œ ¨4 tœ¡Ï±£t´¸¹´2¦:¨„£tœ £ ¥„œ¬­¢J£tœ¡¢D£ ¬«´2¢±´šµ °š¢©¨„£t£tœ€Ÿ°š¢4¸¹œ°š¢4ÏԔ¢4¬«£tœ" t£ °d£tœk£tœ¡¸¥4¢„´2ª­´š¯šºY£t´œ¹ÌS² £tŸ°š¸¹££ ¥„œzτ´2¦±°š¬­¢OŸ œ¡ª«œ€Ñd°š¢J£o¬­¢4µû´šŸ¦±°d£ ¬«´2¢Çߥ„œ Ï4¬­°šª­´š¯2¨„œU4Ÿ ´u¸¹œ¡   t´šŸé¬­¢D£tœ€Ÿ 4Ÿœ€£  z°š¢4ÏЦo°š¬­¢D£ °š¬­¢4   t£tŸ¨”¸¹£ ¨„Ÿ œ¡ £ ¥4°d£(¦o¬«Ÿ Ÿ´šŸ£ ¥„œÈ¢„œ€¯š´š£ ¬­°d£tœ¡ÏÜ´šÖuítœ¡¸¹£   °š¢4϶£ ¥„œ¡¬«Ÿì°š¸€¸¹œ€”£ °š¢4¸¹œ_ t£ °d£ ¨4 €Ç§ß¥4œé  ¨4¦o¦±°dŸ º ¯šœ¡¢„œ€Ÿ°d£t´šŸ t£tŸ¨4¸¹£ ¨„Ÿœ¡ =£ ¥„œÃԔ¢”°šª­ª«º°d¯šŸ œ€œ¡Ïz´2¢´šÖ„² ítœ¡¸¹£  ¼”°dŸ £ ª«º¶°š¸€¸¹´šŸÏ4¬­¢„¯ £t´ £ ¥„œ_¬­¦&´2 tœ¡ÏÕ£t´š”¬­¸  t£tŸ¨”¸¹£ ¨„Ÿ œ°š¢4ÏUÏ4¬«Ñu¬­Ï„œ¡ £ ¥„œ¬­¢„µ·´šŸ¦o°d£ ¬«´2¢ÜÍ ¬«£ ¥4¬®¢ œ¡°š¸¥ £t´š:¬­¸o£t´_°d֔ t£tŸ°š¸¹£© tœ¡¢D£tœ¡¢4¸¹œzτœ¡  ¸¹Ÿ¬«4£ ¬«´2¢4 ¡Ç ߥ„œ¡  œ°dŸ œњœ€Ÿ ֔°šª®¬«³€œ¡Ï°š¢4ϝ4Ÿœ¡ tœ¡¢J£tœ¡ÏoÖDº»½4¾¿”À Áà¿:ÄûÅMf  f¢4°d£ ¨„Ÿ°šª2ª­°š¢„¯2¨4°d¯šœ=¯šœ¡¢„œ€Ÿ°d£t´šŸ¡ÇNg'ºY¨4  ¬­¢4¯ £ ¥„œÃ£tŸ°š¢4  
µûœ€Ÿ¦´uÏ4¨4ª«œÍ"œÎ¸€°š¢¼4Ÿ´SÏ4¨”¸¹œ¦©¨4ª«£ ¬­ª­¬®¢u² ¯2¨4°šª:  ¨4¦o¦o°dŸ¬«œ¡ €ÇaËøÔ4Ÿ t£Éœ€Ñd°šª­¨4°d£ ¬«´2¢±´2¢ì°  ¦o°šª­ª ¢S¨4¦ÈÖ&œ€Ÿ´šµÏ4¬­°šª«´š¯2¨„œì  ¥„´™Í °š¸€¸¹œ€4£ °dÖ:ª«œìŸœ¡  ¨4ª«£   µ·´šŸ£ ¥„œÈ¸¹´2¢D£tœ¡¢J£(¸¹´2¢J£ °š¬®¢„œ¡Ï鬭¢¼£ ¥4œY  ¨4¦o¦o°dŸ¬«œ¡ €Ç da¬­¢4°šª®ª«ºšÆ±Í'œÐ¸¹´2¢4  ¬­Ï„œ€Ÿ   ¸€°šª­°dÖ:¬­ª­¬«£?º °š¢4ÏB¥4´™Í £t´§°šÏ4°d4£_£t´§¢4œ€Í τ´2¦o°š¬­¢”  ŽÒ£ °š tÞu _°š¢4Ï^°d4”ª­¬«² ¸€°d£ ¬«´2¢4 a»ZÚ µÈ°š¢ø°šª«Ÿ œ¡°šÏ„º¶¬®¦”ª«œ¡¦œ¡¢D£tœ¡Ï τ´2¦o°š¬­¢ ¬­ Õœ¹ÌS£tœ¡¢”Ï„œ¡ÏÆU£ ¥„œö°šª«¯š´šŸ¬«£ ¥4¦± Õ¸€°š¢ œ¡°š  ¬­ª­º%ÖLœ °šÏ4°d4£tœ¡Ï Çîd„´šŸo¢„œ€Í%£ °š  ÞS 7Û·´š£ ¥„œ€Ÿ±£ ¥4°š¢ ¢„œ€¯š´š£ ¬ ² °d£ ¬«´2¢LÝ©£ ¥„œéÏ4¬®  ¸¹´2¨„Ÿ tœ7¬®¢J£tœ€Ÿ ”Ÿ œ€£ °d£ ¬«´2¢¤µæ¨4¢4¸¹£ ¬«´2¢„² °šª­¬­£Wºž¦©¨4 t£UÖ&œ/Ÿœ€Ö”¨4¬­ª«£€Ç Ë(ª­ t´„Ƶû´šŸUœ¹Ìu£tœ¡¢4Ï4¬­¢4¯ µ·Ÿ ´2¦ã£WÍ'´± t&œ¡°dޚœ€Ÿ '£t´o¦©¨4ª«£ ¬ ²Q”°dŸ£Wº±Ï4¬®  ¸€¨4    ¬­´2¢4 €Æ °O£ ¥„´šŸ ´2¨4¯2¥øŸ œ¹²W t£tŸ¨4¸¹£ ¨4Ÿ¬­¢„¯/´šµÈ£ ¥„œ_¬­¢D£tœ€Ÿ 4Ÿ œ€£ °e² £ ¬«´2¢Ü¬­ Ÿ œaöS¨4¬«Ÿœ¡ÏÇ Ú¢7°šª­ª ¸€°š tœ¡ ¡ÆL°o¸¹´šŸ”¨4 ´šµÉÏ4¬®°e² ª«´š¯2¨4œ¡ Î¦¨4  £Ö&œ°¡Ñd°š¬­ª­°d֔ª«œ©£t´¼Ö&œ©°š¢4¢4´š£ °d£tœ¡ÏUµû´šŸ £tŸ°š¬®¢4¬­¢„¯°š¢4Ïz£tœ¡  £”¨„Ÿ &´2 tœ¡ ¡Ç 6U34V34023:>=Iu3”c ikjml n!oCprqOsut[ovGww&x)syq0sktyz{jm|}o~2+€u~2sr)ovfj ‚PƒOƒF„Fj …oPqOv+sr~!sr‡†Q~!qOn2x)OˆroЉ‹+vˆuŒ ˆrvoPwŽ%vxOq’‘9xOv“rˆuwPj ”,s–•L—+˜f™Pš›˜>œžLŸ[—˜T 0¡u¢G¢G™£[¤>¥r¦§¨“r“j‡©)©0ªr‚C«[©)©0ª)¬r§ | €rx[t[oPwPj ikjul n!oCprqOsut[ovGww&x)s§[­®jr­®xOn!n2oPvP§r¯°jk±²~2“r“§rq0sut³|´juµXs[¶ )onIj·©O¸O¸)¸rj¹¯KˆunV~2n!~2suOˆuqOnW‰Fˆuº@qOv+»‡¼RosuovGq+~!xOs ~!sKq½‰F“4ooPŒG€[«F¾¿x«[‰[“koPoPŒG€¾¿vGq0suw+n!q0+~!xOsK‰F»[w>oÀ%xOv ¯ÁˆrnV~2n!~!srOˆuqOn‹†Q~q0n!xOOˆuoPwPj#”,sº•L—+˜f™Pš4˜>œ®Â&ÃRÄWÅW¤,ƋÇ)ÇOÇ0§ ¯Á~VÈ“4oÉ| q0ŽxOs§r”>w&vGq0oPnÊj ikjkËLj.l n!n2oPsjɂPƒ)ÌOªujL¯ÁqO~2s‹Gq0~!sr~2suD±²srxÍ}n!oPt[)o²qOÎkx)ˆ[ ¾¿oŽ“4xOvGq0nŽ”,s‹+oPv+Ïq0nwPjÑÐҘÓºÓ6Ÿ[ÔkÕE™GÖ×IÕE˜ÔrØÙ˜>œ’×£u¢ Ú Ð:ÛK§k©OÜuÝ>‚)‚fÞCß Ì)ª)©f«[Ì0à‹ªr§OzáxÏOo6ÎkoPvPj †6jµ j¿l “r“4on2P§®iuj¿â xOÎuÎuw§¿iujãLofq0vf§q0sutН°j¾9»[w&x)sj ‚fƒOƒ)ªrjXË[lR‰[¾}äQ‰.ß.læåusu~VoC¶,w>Gq+o{“rv+x[ŒoPww&x)v9%x)v ~!s[¶ 
%x)v+@q0+~!xOs‡op‹vq)Œ ~2x)s°%v+x)çv+ofq0n2¶IÍLxOvn!tŠoCpFPj·”,s •W—+˜P™fšÒ˜&œ ÂCè4Ð Ú Â ¤>¥)é0j |´j#†Rq0n!oOjŽ‚Pƒ)ƒ)¬rjRlás¹”,s‹+vx[t[ˆuŒ ~2x)sK+x³záq0+ˆrvGq0n®…qOs[¶ )ˆuq0)o‡¼RosroPvq0+~!xOsjê¾ofŒG€já|}oP“kx)v&f§{¯ÁqOŒP니uq0v~2o äásr~!ÏOovGw+~V>»)j®­Xv+ofw&oPs‹+oPtqáµL‰r‰[……”N¶Nƒ‹¬[j ¯°j ±²~!“r“§²ikjWlán2oprq0sut[oPvww+xOs§áqOsut¨zÉj |}oP~V€r~2suOovfj ‚fƒOƒ)ƒrj9äásut[oPvw&qOsut[~!srD‰F“4xOs‹qOsrox)ˆuw z oPOxO+~q+~!xOs †Q~q0n!xOOˆuoOjR”,síìK˜—+îØ&£u˜¡m•L—+˜f™Pš6ï ð6Ô4˜ñWò2¢GóPô)¢ Ú Ô4ó õQ¢GÖTؘÔkÕԋô´ÕÔ@•L—+Ö)™C×IÕE™GÖò)ö{ÕEÖò2˜ôOŸu¢  4÷TØ ×N¢Ó{Ørï˜>œÒÂ苤 Ð Ú ÂÁï ¥O¥O§u“uqOOofwW¬‹„P«[Ü0àuj …XjR…oÏF~2s§º†6jɼ²q+ofw§ŽlÉj²…qTÏF~!oO§ºq0sutøl{j²ùŠqO~2Î4onIj ‚fƒOƒ)Ìrjlás6”,s‹+oPv+n!~!srOˆuqQãWqOw+oPtÉx)s6†QxO@q0~!s6lQŒ ~2x)suw %x)v ¯ÁqOŒG€u~2sro´¾¿vqOsuw+n!q0+~!xOsDxO®¾}q)w&ú‹¶&ûQv+~!os‹oPtD†Q~q¶ n!xO)ˆroPwPj®”,s¹•L—+˜f™Pš˜&œáÂÐ: [Ä¿•Žï ¥OüOj ” ju¯¹q0sr~qOsut³¯°jr¯ÁqT»FÎrˆrv»O§[oftrwjW‚fƒOƒOƒuj Ú óýÖÔ.™G¢ Ø´ÕÔ Ú Ÿ[×,˜Ó@Ö×IÕE™þk¢ÿO×9 kŸ[ÓºÓ@Ö— ÕfÖ×IÕE˜ÔkjÒ¯K”>¾ ­Xv+ofw+wPj ikj†{j¿¯KxFxOvoOjD‚fƒOÌ)ƒrj Ú õQ¢GÖO™×ÊÕý¢ Ú ¡O¡4—˜PÖ)™£‡×N˜ÿ)¤ ¡.ò2ÖÔ.Ö×IÕE˜Ô@ÕÔ6ÿC¡u¢C— ×Ö0Ô4ó Ú óýfÕE™¢¤ ÅWÕýPÕÔFô² 4÷TØG×,¢Ó{Ø j ­X€j †{j[€roPw+~w§räásr~2Ï)ovGw&~2>»½xOÒ‘LqOn2~2%xOvsr~qr§[…Òj lÉj z{j‹|}oP~V€r~!srOoPv9q0skt@¯°j)±²n!oPw+osjL‚Pƒ)ƒ‹„[j†á~q0n!xO)ˆroálQŒ  ‘9nqOww+~VåkŒPq~2x)s{äáw+~2suR…¿q0sr)ˆuq0)o9¯Kx[t[oPn!wPj”,s@•L—+˜f™Pš ˜&œQLŸ[—+˜ 0¡u¢G¢™G£F¤>¥u¦§r“kq0Oofw ©O©Oª)¬f«r©O©OªOÌr§)|}€rx[t[oPwPj z{jr|}o~2+€r~!sr)ovfj}‚PƒOƒ)ƒrj:|}xOÎrˆkw>}”,s[%x)v+@q~2x)sµÒp‹vq)Œ ¶ ~2x)s¹~!sŠq<‰F“4ooPŒG€·¾¿vqOsuw&nq~2x)s¹‰F»[w>o³jɔ,s°•L—+˜f™Pš ˜&œQLŸ[—+˜ 0¡u¢G¢™G£F¤>¥)¥0§r“kq0Oofw ©àF©)„P«r©à‹ªO¸rj â{jr¾¿qOsuq0úqºq0sutlÉjÒxOú)xFxujW‚Pƒ)ƒOƒrj:l s³µ½ŒC~!os‹ቋGq¶ ~!w&+~ŒqOn¿‰F“4oofŒG€Álጠá¾9»F“koɾ:q0)O~!sr½‰F»[w>oÀ%xOváq ‰[“koPoPŒG€°¾vGq0skw&nq~2x)sЉF»[w&+o³j½”,sm•L—˜P™fš}˜>œ Ú ÐĤ ¥)¥0§k“uq0)oPwWªOÌu‚ «Fª)ÌOÌu§)ãWq0n2+~!ŽxOvoOj ù jøù·q0€un!w&+oPvP§ oPt#j ©0¸)¸O¸rj .õWÛ XÂÄ ¿˜Ÿ[Ô.óOÖ0×ÊÕE˜0ÔrØæ˜&œ  0¡u¢¢G™G£F¤,×,˜¤N O¡r¢G¢G™£ þr—+Ö0ÔrØ ò2Ö×IÕE˜Ôkj ‰[“rv+~!sr)ovfj
Headline Generation Based on Statistical Translation

Michele Banko
Computer Science Department, Johns Hopkins University, Baltimore, MD 21218
[email protected]

Vibhu O. Mittal
Just Research, 4616 Henry Street, Pittsburgh, PA 15213
[email protected]

Michael J. Witbrock
Lycos Inc., 400-2 Totten Pond Road, Waltham, MA 02451
[email protected]

[Author note: Vibhu Mittal is now at Xerox PARC, 3333 Coyote Hill Road, Palo Alto, CA 94304, USA; e-mail: [email protected]. Michael Witbrock's initial work on this system was performed whilst at Just Research.]

Abstract

Extractive summarization techniques cannot generate document summaries shorter than a single sentence, something that is often required. An ideal summarization system would understand each document and generate an appropriate summary directly from the results of that understanding. A more practical approach to this problem results in the use of an approximation: viewing summarization as a problem analogous to statistical machine translation. The issue then becomes one of generating a target document in a more concise language from a source document in a more verbose language. This paper presents results on experiments using this approach, in which statistical models of the term selection and term ordering are jointly applied to produce summaries in a style learned from a training corpus.

1 Introduction

Generating effective summaries requires the ability to select, evaluate, order and aggregate items of information according to their relevance to a particular subject or for a particular purpose. Most previous work on summarization has focused on extractive summarization: selecting text spans – either complete sentences or paragraphs – from the original document. These extracts are
then arranged in a linear order (usually the same order as in the original document) to form a summary document.

There are several possible drawbacks to this approach, one of which is the focus of this paper: the inability to generate coherent summaries shorter than the smallest text spans being considered – usually a sentence, and sometimes a paragraph. This can be a problem, because in many situations, a short headline-style indicative summary is desired. Since, in many cases, the most important information in the document is scattered across multiple sentences, this is a problem for extractive summarization; worse, sentences ranked best for summary selection often tend to be even longer than the average sentence in the document.

This paper describes an alternative approach to summarization capable of generating summaries shorter than a sentence, some examples of which are given in Figure 1. It does so by building statistical models for content selection and surface realization. This paper reviews the framework, discusses some of the pros and cons of this approach using examples from our corpus of news wire stories, and presents an initial evaluation.

2 Related Work

Most previous work on summarization focused on extractive methods, investigating issues such as cue phrases (Luhn, 1958), positional indicators (Edmundson, 1964), lexical occurrence statistics (Mathis et al., 1973), probabilistic measures for token salience (Salton et al., 1997), and the use of implicit discourse structure (Marcu, 1997).
Work on combining an information extraction phase followed by generation has also been reported: for instance, the FRUMP system (DeJong, 1982) used templates for both information extraction and presentation. More recently, summarizers using sophisticated post-extraction strategies, such as revision (McKeown et al., 1999; Jing and McKeown, 1999; Mani et al., 1999), and sophisticated grammar-based generation (Radev and McKeown, 1998) have also been presented.

Figure 1: Sample output from the system for a variety of target summary lengths from a single input document.

1: time                                                        -3.76   Beam 40
2: new customers                                               -4.41   Beam 81
3: dell computer products                                      -5.30   Beam 88
4: new power macs strategy                                     -6.04   Beam 90
5: apple to sell macintosh users                               -8.20   Beam 86
6: new power macs strategy on internet                         -9.35   Beam 88
7: apple to sell power macs distribution strategy             -10.32   Beam 89
8: new power macs distribution strategy on internet products  -11.81   Beam 88
9: apple to sell power macs distribution strategy on internet -13.09   Beam 86

The work reported in this paper is most closely related to work on statistical machine translation, particularly the 'IBM-style' work on CANDIDE (Brown et al., 1993). This approach was based on a statistical translation model that mapped between sets of words in a source language and sets of words in a target language, at the same time using an ordering model to constrain possible token sequences in a target language based on likelihood. In a similar vein, a summarizer can be considered to be 'translating' between two languages: one verbose and the other succinct (Berger and Lafferty, 1999; Witbrock and Mittal, 1999). However, by definition, the translation during summarization is lossy, and consequently, somewhat easier to design and experiment with.
As we will discuss in this paper, we built several models of varying complexity; [Footnote 1] even the simplest one did reasonably well at summarization, whereas it would have been severely deficient at (traditional) translation.

[Footnote 1: We have very recently become aware of related work that builds upon more complex, structured models – syntax trees – to compress single sentences (Knight and Marcu, 2000); our work differs from that work in (i) the level of compression possible (much more) and (ii) the accuracy possible (less).]

3 The System

As in any language generation task, summarization can be conceptually modeled as consisting of two major sub-tasks: (1) content selection, and (2) surface realization. Parameters for statistical models of both of these tasks were estimated from a training corpus of approximately 25,000 1997 Reuters news-wire articles on politics, technology, health, sports and business. The target documents – the summaries – that the system needed to learn the translation mapping to, were the headlines accompanying the news stories.

The documents were preprocessed before training: formatting and mark-up information, such as font changes and SGML/HTML tags, was removed; punctuation, except apostrophes, was also removed. Apart from these two steps, no other normalization was performed. It is likely that further processing, such as lemmatization, might be useful, producing smaller and better language models, but this was not evaluated for this paper.

3.1 Content Selection

Content selection requires that the system learn a model of the relationship between the appearance of some features in a document and the appearance of corresponding features in the summary. This can be modeled by estimating the likelihood of some token appearing in a summary given that some tokens (one or more, possibly different tokens) appeared in the document to be summarized.
The very simplest, "zero-level" model for this relationship is the case when the two tokens in the document and the summary are identical. This can be computed as the conditional probability of a word occurring in the summary given that the word appeared in the document:

    P(w \in H \mid w \in D)

where H and D represent the bags of words that the headline and the document contain.

[Figure 2: Distribution of Headline Lengths for early 1997 Reuters News Stories. The plot shows the proportion of documents (roughly 0 to 0.4) against headline length in words (0 to 12).]

Once the parameters of a content selection model have been estimated from a suitable document/summary corpus, the model can be used to compute selection scores for candidate summary terms, given the terms occurring in a particular source document. Specific subsets of terms, representing the core summary content of an article, can then be compared for suitability in generating a summary. This can be done at two levels: (1) likelihood of the length of resulting summaries, given the source document, and (2) likelihood of forming a coherently ordered summary from the content selected.

The length of the summary can also be learned as a function of the source document. The simplest model for document length is a fixed length based on document genre. For the discussions in this paper, this will be the model chosen. Figure 2 shows the distribution of headline length. As can be seen, a Gaussian distribution could also model the likely lengths quite accurately.

Finally, to simplify parameter estimation for the content selection model, we can assume that the likelihood of a word in the summary is independent of other words in the summary. In this case, the probability of any particular summary-content candidate can be calculated simply as the product of the probabilities of the terms in the candidate set.
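To make the zero-level content-selection model concrete, the following is a minimal sketch, not the authors' code, that estimates the conditional probability of a word appearing in a headline given that it appeared in the document, from (document, headline) pairs. The toy corpus and variable names are illustrative assumptions.

```python
from collections import Counter

def train_content_selection(pairs):
    """Estimate P(w in headline | w in document) from (document, headline) pairs.

    Counts are over documents: of the documents containing word w, how
    many also contain w in their headline."""
    doc_count = Counter()    # number of documents containing w
    both_count = Counter()   # number of documents where w is in doc AND headline
    for doc_tokens, head_tokens in pairs:
        head_words = set(head_tokens)
        for w in set(doc_tokens):
            doc_count[w] += 1
            if w in head_words:
                both_count[w] += 1
    return {w: both_count[w] / doc_count[w] for w in doc_count}

# Toy corpus; the real training set was ~25,000 Reuters stories.
pairs = [
    ("clinton met advisers in washington".split(), "clinton meets advisers".split()),
    ("clinton will visit israel next week".split(), "clinton to visit israel".split()),
    ("markets fell in washington trading".split(), "markets fall".split()),
]
model = train_content_selection(pairs)
print(model["clinton"])     # in 2 docs, in both headlines -> 1.0
print(model["washington"])  # in 2 docs, in no headline -> 0.0
```

Note that this is a per-word relative frequency; in practice some smoothing of the counts would be needed for rare words.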
Therefore, the overall probability of a candidate summary, H, consisting of words (w_1, ..., w_n), under the simplest, zero-level, summary model based on the previous assumptions, can be computed as the product of the likelihood of (i) the terms selected for the summary, (ii) the length of the resulting summary, and (iii) the most likely sequencing of the terms in the content set:

    P(w_1, \ldots, w_n) = \prod_{i=1}^{n} P(w_i \in H \mid w_i \in D) \cdot P(\mathrm{len}(H) = n) \cdot \prod_{i=2}^{n} P(w_i \mid w_1, \ldots, w_{i-1})

In general, the probability of a word appearing in a summary cannot be considered to be independent of the structure of the summary, but the independence assumption is an initial modeling choice.

3.2 Surface Realization

The probability of any particular surface ordering as a headline candidate can be computed by modeling the probability of word sequences. The simplest model is a bigram language model, where the probability of a word sequence is approximated by the product of the probabilities of seeing each term given its immediate left context. Probabilities for sequences that have not been seen in the training data are estimated using back-off weights (Katz, 1987). As mentioned earlier, in principle, surface linearization calculations can be carried out with respect to any textual spans from characters on up, and could take into account additional information at the phrase level. They could also, of course, be extended to use higher order n-grams, providing that sufficient numbers of training headlines were available to estimate the probabilities.

3.3 Search

Even though content selection and summary structure generation have been presented separately, there is no reason for them to occur independently, and in fact, in our current implementation, they are used simultaneously to contribute to an overall weighting scheme that ranks possible summary candidates against each other. Thus, the overall score used in ranking can be obtained as a weighted combination of the content and structure model log probabilities.
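The combined ranking score can be sketched as a weighted sum of the three log probabilities. This is an illustrative reconstruction: the probability tables, the unseen-event floor (used here in place of Katz back-off), and the unit weights are assumptions, not the authors' settings.

```python
import math

def headline_score(words, p_select, p_length, p_bigram, alpha=1.0, beta=1.0, gamma=1.0):
    """Weighted combination of content-selection, length, and bigram
    log probabilities for one candidate headline (zero-level model)."""
    FLOOR = 1e-6  # crude floor for unseen events, instead of Katz back-off
    def logp(p):
        return math.log(max(p, FLOOR))
    selection = sum(logp(p_select.get(w, 0.0)) for w in words)
    length = logp(p_length.get(len(words), 0.0))
    # Bigram pairs: (<s>, w1), (w1, w2), ...
    sequence = sum(logp(p_bigram.get((u, v), 0.0)) for u, v in zip(["<s>"] + words, words))
    return alpha * selection + beta * length + gamma * sequence

# Hypothetical probability tables for illustration only.
p_select = {"clinton": 0.9, "wants": 0.4}
p_length = {2: 0.3}
p_bigram = {("<s>", "clinton"): 0.6, ("clinton", "wants"): 0.2}
score = headline_score(["clinton", "wants"], p_select, p_length, p_bigram)
print(round(score, 2))  # ln(0.9)+ln(0.4)+ln(0.3)+ln(0.6)+ln(0.2) ≈ -4.35
```

Working in log space keeps the products numerically stable and makes the weighted combination a simple sum.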
Cross-validation is used to learn weights \alpha, \beta and \gamma for a particular document genre:

    \arg\max_H \left( \alpha \sum_{i=1}^{n} \log P(w_i \in H \mid w_i \in D) + \beta \log P(\mathrm{len}(H) = n) + \gamma \sum_{i=2}^{n} \log P(w_i \mid w_{i-1}) \right)

To generate a summary, it is necessary to find a sequence of words that maximizes the probability, under the content selection and summary structure models, that it was generated from the document to be summarized. In the simplest, zero-level model that we have discussed, since each summary term is selected independently, and the summary structure model is first order Markov, it is possible to use Viterbi beam search (Forney, 1973) to efficiently find a near-optimal summary. [Footnote 2] Other statistical models might require the use of a different heuristic search algorithm.

[Footnote 2: In the experiments discussed in the following section, a beam width of three, and a minimum beam size of twenty states, was used. In other experiments, we also tried to strongly discourage paths that repeated terms, by reweighting after backtracking at every state, since, otherwise, bigrams that start repeating often seem to pathologically overwhelm the search; this reweighting violates the first order Markovian assumptions, but seems to do more good than harm.]

An example of the results of a search for candidates of various lengths is shown in Figure 1. It shows the set of headlines generated by the system when run against a real news story discussing Apple Computer's decision to start direct internet sales and comparing it to the strategy of other computer makers.

4 Experiments

Zero level–Model: The system was trained on approximately 25,000 news articles from Reuters dated between 1/Jan/1997 and 1/Jun/1997. After punctuation had been stripped, these contained about 44,000 unique tokens in the articles and slightly more than 15,000 tokens in the headlines.
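The Viterbi-style beam search described above can be sketched as follows. This is an illustrative reimplementation under stated assumptions (toy probability tables, equal model weights, a floor probability for unseen events instead of back-off, and a fixed target length), not the authors' implementation.

```python
import math

def beam_search_headline(candidate_words, p_select, p_bigram, length, beam_width=3):
    """Beam search for a high-probability headline of a given length.

    p_select[w]     : content-selection probability P(w in H | w in D)
    p_bigram[(u,v)] : transition probability P(v | u); '<s>' starts a headline
    Scores are summed log probabilities."""
    FLOOR = 1e-6
    def logp(p):
        return math.log(max(p, FLOOR))
    beam = [(0.0, ["<s>"])]  # (log score, partial word sequence)
    for _ in range(length):
        expanded = []
        for score, words in beam:
            for w in candidate_words:
                if w in words:  # discourage repeated terms
                    continue
                s = score + logp(p_select.get(w, 0.0)) + logp(p_bigram.get((words[-1], w), 0.0))
                expanded.append((s, words + [w]))
        # Keep only the top beam_width partial hypotheses.
        beam = sorted(expanded, key=lambda x: x[0], reverse=True)[:beam_width]
    best_score, best_words = beam[0]
    return best_words[1:], best_score

# Hypothetical model tables for illustration only.
p_select = {"clinton": 0.9, "meets": 0.5, "netanyahu": 0.7, "the": 0.1}
p_bigram = {("<s>", "clinton"): 0.6, ("clinton", "meets"): 0.5, ("meets", "netanyahu"): 0.4}
headline, score = beam_search_headline(["clinton", "meets", "netanyahu", "the"], p_select, p_bigram, 3)
print(headline)  # ['clinton', 'meets', 'netanyahu']
```

Because both models factorize over positions, pruning to a small beam at each step loses little in practice while keeping the search cheap.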
Representing all the pairwise conditional probabilities for all combinations of article and headline words [Footnote 3] added significant complexity, so we simplified our model further and investigated the effectiveness of training on a more limited vocabulary: the set of all the words that appeared in any of the headlines. [Footnote 4] Conditional probabilities for words in the headlines that also appeared in the articles were computed. As discussed earlier, in our zero-level model, the system was also trained on bigram transition probabilities as an approximation to the headline syntax. Sample output from the system using this simplified model is shown in Figures 1 and 3.

[Footnote 3: This requires a matrix with 660 million entries, or about 2.6GB of memory. This requirement can be significantly reduced by using a threshold to prune values and using a sparse matrix representation for the remaining pairs. However, inertia and the easy availability of the CMU-Cambridge Statistical Modeling Toolkit – which generates the full matrix – have so far conspired to prevent us from exercising that option.]

[Footnote 4: An alternative approach to limiting the size of the mappings that need to be estimated would be to use only the top R words, where R could have a small value in the hundreds, rather than the thousands, together with the words appearing in the headlines. This would limit the size of the model while still allowing more flexible content selection.]

Zero Level–Performance Evaluation: The zero-level model that we have discussed so far works surprisingly well, given its strong independence assumptions and very limited vocabulary. There are problems, some of which are most likely due to lack of sufficient training data. [Footnote 5] Ideally, we would want to evaluate the system's performance in terms both of content selection success and realization quality. However, it is hard to computationally evaluate coherence and phrasing effectiveness, so we have, to date, restricted ourselves to the content aspect, which is more amenable to a quantitative analysis. (We have experience doing much more laborious human evaluation, and plan to do so with our statistical approach as well, once the model is producing summaries that might be competitive with alternative approaches.)

[Footnote 5: We estimate that approximately 100MB of training data would give us reasonable estimates for the models that we would like to evaluate; we had access to much less.]

Figure 3: Sample article (with original headline) and system generated output using the simplest, zero-level, lexical model. Numbers to the right are log probabilities of the string, and search beam size, respectively.

<HEADLINE> U.S. Pushes for Mideast Peace </HEADLINE>
President Clinton met with his top Mideast advisers, including Secretary of State Madeleine Albright and U.S. peace envoy Dennis Ross, in preparation for a session with Israel Prime Minister Benjamin Netanyahu tomorrow. Palestinian leader Yasser Arafat is to meet with Clinton later this week. Published reports in Israel say Netanyahu will warn Clinton that Israel can't withdraw from more than nine percent of the West Bank in its next scheduled pullback, although Clinton wants a 12-15 percent pullback.

1: clinton                                    -6      0
2: clinton wants                             -15      2
3: clinton netanyahu arafat                  -21     24
4: clinton to mideast peace                  -28     98
5: clinton to meet netanyahu arafat          -33    298
6: clinton to meet netanyahu arafat israel   -40   1291

After training, the system was evaluated on a separate, previously unseen set of 1000 Reuters news stories, distributed evenly amongst the same topics found in the training set. For each of these stories, headlines were generated for a variety of lengths and compared against (i) the actual headlines, as well as (ii) the sentence ranked as the most important summary sentence. The latter is interesting because it helps suggest the degree to which headlines used a different vocabulary from that used in the story itself. [Footnote 6] Term overlap between the generated headlines and the test standards (both the actual headline and the summary sentence) was the metric of performance. For each news article, the maximum overlap between the actual headline and the generated headline was noted; the length at which this overlap was maximal was also taken into account. Also tallied were counts of headlines that matched completely – that is, all of the words in the generated headline were present in the actual headline – as well as their lengths. These statistics illustrate the system's performance in selecting content words for the headlines.

[Footnote 6: The summarizer we used here to test was an off-the-shelf Carnegie Mellon University summarizer, which was the top ranked extraction based summarizer for news stories at the 1998 DARPA-TIPSTER evaluation workshop (Tip, 1998). This summarizer uses a weighted combination of sentence position, lexical features and simple syntactical measures such as sentence length to rank sentences. The use of this summarizer should not be taken as an indicator of its value as a testing standard; it has more to do with the ease of use and the fact that it was a reasonable candidate.]

Table 1: Evaluating the use of the simplest lexical model for content selection on 1000 Reuters news articles. The headline length given is that at which the overlap between the terms in the target headline and the generated summary was maximized. The percentage of complete matches indicates how many of the summaries of a given length had all their terms included in the target headline.

Gen. Headline     Word      Percentage of
Length (words)    Overlap   complete matches
4                 0.2140    19.71%
5                 0.2027    14.10%
6                 0.2080    12.14%
7                 0.1754    08.70%
8                 0.1244    11.90%

Actual headlines are often, also, ungrammatical, incomplete phrases. It is likely that more sophisticated language models, such as structure models (Chelba, 1997; Chelba and Jelinek, 1998), or longer n-gram models would lead to the system generating headlines that were more similar in phrasing to real headlines because longer range dependencies could be taken into account. Table 1 shows the results of these term selection schemes. As can be seen, even with such an impoverished language model, the system does quite well: when the generated headlines are four words long, almost one in every five has all of its words matched in the article's actual headline. This percentage drops, as is to be expected, as headlines get longer.

Table 2: Overlap between terms in the generated headlines and in the original headlines and extracted summary sentences, respectively, of the article. Using Part of Speech (POS) and information about a token's location in the source document, in addition to the lexical information, helps improve performance on the Reuters test set.

     ---------- Overlap with headline ----------    ---------- Overlap with summary ----------
L    Lex      +Position  +POS     +Position+POS     Lex      +Position  +POS     +Position+POS
1    0.37414  0.39888    0.30522  0.40538           0.61589  0.70787    0.64919  0.67741
2    0.24818  0.26923    0.27246  0.27838           0.57447  0.63905    0.57831  0.63315
3    0.21831  0.24612    0.20388  0.25048           0.55251  0.63760    0.55610  0.62726
4    0.21404  0.24011    0.18721  0.25741           0.56167  0.65819    0.52982  0.61099
5    0.20272  0.21685    0.18447  0.21947           0.55099  0.63371    0.53578  0.58584
6    0.20804  0.19886    0.17593  0.21168           0.55817  0.60511    0.51466  0.58802

Multiple Selection Models: POS and Position
As we mentioned earlier, the zero-level model that we have discussed so far can be extended to take into account additional information both for the content selection and for the surface realization strategy. We will briefly discuss the use of two additional sources of information: (i) part of speech (POS) information, and (ii) positional information. POS information can be used both in content selection – to learn which word-senses are more likely to be part of a headline – and in surface realization.
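One simple way to combine a fine-grained lexical model with a coarser POS-based model is a mixture of the two probabilities, which is the general idea behind the mixture model the paper uses for content selection. The sketch below is illustrative only: the interpolation weight and the probability tables are assumptions, not estimated values.

```python
def mixture_prob(word, pos_tag, p_lex, p_pos, lam=0.7):
    """Mixture of lexical and POS content-selection models:
    P(w in H) ≈ lam * P_lex(w) + (1 - lam) * P_pos(tag(w)).
    lam would normally be fit on held-out data; 0.7 is a placeholder."""
    return lam * p_lex.get(word, 0.0) + (1 - lam) * p_pos.get(pos_tag, 0.0)

p_lex = {"clinton": 0.9}           # estimated per word (large, sparse)
p_pos = {"NNP": 0.5, "DT": 0.05}   # estimated per POS tag (few parameters)
print(mixture_prob("clinton", "NNP", p_lex, p_pos))   # ≈ 0.78
print(mixture_prob("albright", "NNP", p_lex, p_pos))  # unseen word falls back on POS: ≈ 0.15
```

The POS component needs far less training data, so it can supply a sensible estimate for words the lexical model has never seen in a headline.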
Training a POS model for both these tasks requires far less data than training a lexical model, since the number of POS tags is much smaller. We used a mixture model (McLachlan and Basford, 1988) – combining the lexical and the POS probabilities – for both the content selection and the linearization tasks.

Another indicator of salience is positional information, which has often been cited as one of the most important cues for summarization by extraction (Hovy and Lin, 1997; Mittal et al., 1999). We trained a content selection model based on the position of the tokens in the training set in their respective documents. There are several models of positional salience that have been proposed for sentence selection; we used the simplest possible one: estimating the probability of a token appearing in the headline given that it appeared in the 1st, 2nd, 3rd or 4th quartile of the body of the article. We then tested mixtures of the lexical and POS models, lexical and positional models, and all three models combined together. Sample output for the article in Figure 3, using both lexical and POS/positional information, can be seen in Figure 4.

Figure 4: Output generated by the system using augmented lexical models. Numbers to the right are log probabilities of the generated strings under the generation model.

(a) System generated output using a lexical + POS model:
1: clinton                                   -23.27
2: clinton wants                             -52.44
3: clinton in albright                       -76.20
4: clinton to meet albright                 -105.5
5: clinton in israel for albright           -129.9
6: clinton in israel to meet albright       -158.57

(b) System generated output using a lexical + positional model:
1: clinton                                    -3.71
2: clinton mideast                           -12.53
3: clinton netanyahu arafat                  -17.66
4: clinton netanyahu arafat israel           -23.1
5: clinton to meet netanyahu arafat          -28.8
6: clinton to meet netanyahu arafat israel   -34.38

(c) System generated output using a lexical + POS + positional model:
1: clinton                                   -21.66
2: clinton wants                             -51.12
3: clinton in israel                         -58.13
4: clinton meet with israel                  -78.47
5: clinton to meet with israel               -87.08
6: clinton to meet with netanyahu arafat    -107.44

As can be seen in Table 2 [Footnote 7], although adding the POS information alone does not seem to provide any benefit, positional information does. When used in combination, each of the additional information sources seems to improve the overall model of summary generation.

Problems with evaluation: Some of the statistics that we presented in the previous discussion suggest that this relatively simple statistical summarization system is not very good compared to some of the extraction based summarization systems that have been presented elsewhere (e.g., (Radev and Mani, 1997)). However, it is worth emphasizing that many of the headlines generated by the system were quite good, but were penalized because our evaluation metric was based on the word-error rate and the generated headline terms did not exactly match the original ones.

Table 3: Some pairs of target headline and generated summary terms that were counted as errors by the evaluation, but which are semantically equivalent, together with some "equally good" generated headlines that were counted as wrong in the evaluation.

Original term        Generated term
Nations Top Judge    Rehnquist
Kaczynski            Unabomber Suspect
ER                   Top-Rated Hospital Drama
Drugs                Cocaine

Original headline                    Generated headline
Wall Street Stocks Decline           Dow Jones index lower
49ers Roll Over Vikings 38-22        49ers to nfc title game
Corn, Wheat Prices Fall              soybean grain prices lower
Many Hopeful on N. Ireland Accord    britain ireland hopeful of irish peace
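The term-overlap scoring used in this evaluation can be sketched as follows. This is an illustrative reconstruction under a simple normalization (overlap as the fraction of generated terms found in the reference); the paper's exact normalization may differ.

```python
def term_overlap(generated, reference):
    """Fraction of the generated headline's terms that also occur in the reference."""
    gen, ref = set(generated), set(reference)
    return len(gen & ref) / len(gen) if gen else 0.0

def is_complete_match(generated, reference):
    """True if every generated term appears in the reference headline."""
    return set(generated) <= set(reference)

gen = "clinton to meet netanyahu arafat".split()
ref = "clinton netanyahu arafat israel".split()
print(term_overlap(gen, ref))       # 3 of 5 generated terms overlap -> 0.6
print(is_complete_match(gen, ref))  # False: 'to' and 'meet' are missing
```

As Table 3 illustrates, such exact-match scoring penalizes semantically equivalent rewordings, which is precisely the evaluation problem discussed above.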
A quick manual scan of some of the failures that might have been scored as successes in a subjective manual evaluation indicated that some of these errors could not have been avoided without adding knowledge to the system, for example, allowing the use of alternate terms for referring to collective nouns. Some of these errors are shown in Table 3.

[Footnote 7: Unlike the data in Table 1, these headlines contain only six words or fewer.]

5 Conclusions and Future Work

This paper has presented an alternative to extractive summarization: an approach that makes it possible to generate coherent summaries that are shorter than a single sentence and that attempt to conform to a particular style. Our approach applies statistical models of the term selection and term ordering processes to produce short summaries, shorter than those reported previously. Furthermore, with a slight generalization of the system described here, the summaries need not contain any of the words in the original document, unlike previous statistical summarization systems. Given good training corpora, this approach can also be used to generate headlines from a variety of formats: in one case, we experimented with corpora that contained Japanese documents and English headlines. This resulted in a working system that could simultaneously translate and summarize Japanese documents. [Footnote 8]

The performance of the system could be improved by improving either content selection or linearization. This can be through the use of more sophisticated models, such as additional language models that take into account the signed distance between words in the original story to condition the probability that they should appear separated by some distance in the headline.

[Footnote 8: Since our initial corpus was constructed by running a simple lexical translation system over Japanese headlines, the results were poor, but we have high hopes that usable summaries may be produced by training over larger corpora.]
Recently, we have extended the model to generate multi-sentential summaries as well: for instance, given an initial sentence such as "Clinton to meet visit MidEast." and words that are related to nouns ("Clinton" and "mideast") in the first sentence, the system biases the content selection model to select other nouns that have high mutual information with these nouns. In the example sentence, this generated the subsequent sentence "US urges Israel plan." This model currently has several problems that we are attempting to address: for instance, the fact that the words co-occur in adjacent sentences in the training set is not sufficient to build coherent adjacent sentences (problems with pronominal references, cue phrases, sequence, etc. abound). Furthermore, our initial experiments have suffered from a lack of good training and testing corpora; few of the news stories we have in our corpora contain multi-sentential headlines.

While the results so far can only be seen as indicative, this breed of non-extractive summarization holds a great deal of promise, both because of its potential to integrate many types of information about source documents and intended summaries, and because of its potential to produce very brief coherent summaries. We expect to improve both the quality and scope of the summaries produced in future work.

References

Adam Berger and John Lafferty. 1999. Information retrieval as statistical translation. In Proc. of the 22nd ACM SIGIR Conference (SIGIR-99), Berkeley, CA.
Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, (2):263–312.
Ciprian Chelba and F. Jelinek. 1998. Exploiting syntactic structure for language modeling. In Proc. of ACL-98, Montreal, Canada. ACL.
Ciprian Chelba. 1997. A structured language model. In Proc. of the ACL-97, Madrid, Spain. ACL.
Gerald F. DeJong. 1982.
An overview of the FRUMP system. In Wendy G. Lehnert and Martin H. Ringle, editors, Strategies for Natural Language Processing, pages 149– 176. Lawrence Erlbaum Associates, Hillsdale, NJ. H. P. Edmundson. 1964. Problems in automatic extracting. Communications of the ACM, 7:259–263. G. D. Forney. 1973. The Viterbi Algorithm. Proc. of the IEEE, pages 268–278. Eduard Hovy and Chin Yew Lin. 1997. Automated text summarization in SUMMARIST. In Proc. of the Wkshp on Intelligent Scalable Text Summarization, ACL-97. Hongyan Jing and Kathleen McKeown. 1999. The decomposition of human-written summary sentences. In Proc. of the 22nd ACM SIGIR Conference, Berkeley, CA. S. Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech and Signal Processing, 24. Kevin Knight and Daniel Marcu. 2000. Statistics-based summarization — step one: Sentence compression. In Proc. of AAAI-2000, Austin, TX. P. H. Luhn. 1958. Automatic creation of literature abstracts. IBM Journal, pages 159–165. Inderjeet Mani, Barbara Gates, and Eric Bloedorn. 1999. Improving summaries by revising them. In Proc. of ACL99, Baltimore, MD. Daniel Marcu. 1997. From discourse structures to text summaries. In Proc. of the ACL’97 Wkshp on Intelligent Text Summarization, pages 82–88, Spain. B. A. Mathis, J. E. Rush, and C. E. Young. 1973. Improvement of automatic abstracts by the use of structural analysis. JASIS, 24:101–109. Kathleen R. McKeown, J. Klavans, V. Hatzivassiloglou, R. Barzilay, and E. Eskin. 1999. Towards Multidocument Summarization by Reformulation: Progress and Prospects. In Proc. of AAAI-99. AAAI. G.J. McLachlan and K. E. Basford. 1988. Mixture Models. Marcel Dekker, New York, NY. Vibhu O. Mittal, Mark Kantrowitz, Jade Goldstein, and Jaime Carbonell. 1999. Selecting Text Spans for Document Summaries: Heuristics and Metrics. In Proc. of AAAI-99, pages 467–473, Orlando, FL, July. AAAI. 
Dragomir Radev and Inderjeet Mani, editors. 1997. Proc. of the Workshop on Intelligent Scalable Text Summarization, ACL/EACL-97 (Madrid). ACL, Madrid, Spain. Dragomir Radev and Kathy McKeown. 1998. Generating natural language summaries from multiple online sources. Compuutational Linguistics. Gerard Salton, A. Singhal, M. Mitra, and C. Buckley. 1997. Automatic text structuring and summary. Info. Proc. and Management, 33(2):193–207, March. 1998. Tipster text phase III 18-month workshop notes, May. Fairfax, VA. Michael Witbrock and Vibhu O. Mittal. 1999. Headline generation: A framework for generating highlycondensed non-extractive summaries. In Proc. of the 22nd ACM SIGIR Conference (SIGIR-99), pages 315– 316, Berkeley, CA.
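The mutual-information bias on noun selection used in the multi-sentential extension can be illustrated with a toy pointwise mutual information (PMI) computation over sentence co-occurrences. The sentences, vocabulary, and the unseen-pair floor below are invented for illustration, not taken from the system described above.

```python
import math
from collections import Counter

# Invented bag-of-noun "sentences" for the sketch.
sentences = [
    {"clinton", "mideast", "visit"},
    {"clinton", "israel", "plan"},
    {"mideast", "israel", "us"},
    {"weather", "rain"},
]

n = len(sentences)
unigram = Counter(w for s in sentences for w in s)
pair = Counter(frozenset((a, b)) for s in sentences
               for a in s for b in s if a < b)

def pmi(a, b, floor=-10.0):
    """PMI of within-sentence co-occurrence; floor for unseen pairs."""
    joint = pair[frozenset((a, b))] / n
    if joint == 0.0:
        return floor
    return math.log(joint / ((unigram[a] / n) * (unigram[b] / n)))

seeds = {"clinton", "mideast"}          # nouns from the first sentence
candidates = set(unigram) - seeds

def bias(w):
    """Total PMI of a candidate noun with the seed nouns."""
    return sum(pmi(w, s) for s in seeds)

best = max(candidates, key=bias)
print(best)
```

Content selection would then be reweighted toward high-scoring candidates such as `best` when generating the subsequent sentence.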
[The next paper in this collection, from the Communications Research Laboratory, Ministry of Posts and Telecommunications, describes named entity (NE) extraction based on a maximum entropy (M.E.) model followed by transformation rules, trained on CRL NE data and Mainichi newspaper articles and evaluated on the IREX-NE task. Its text was extracted with a non-standard font encoding and is unrecoverably garbled.]
2000 4000 6000 8000 10000 12000 F-measure Š Number of sentences "arrest.with_rules" "arrest.without_rules" "general.with_rules" "general.without_rules" ]Z½¿ãaÙfÄÒé1 š ê2Ã7ß¿ÂQË+½ÞӐÉf¾¼“½ÞÀ È6ÃÇËá)ÃÇÃ7É÷˼fÃUÂFÊ.ӐÙfÉË Óaà§ËÒÄ+Âa½¿Éf½ØÉã œÂQË+Â.ÂaÉfÅ1ÂaÆÇƗÙÄ+ÂFÆÇÌ Ü ÿQ‹   u3,2Œ,Žu @ 8*39>E B K)ÓaÄ˼M὿Æ~è Í K)ÓaÄÒË+¼y὿Æ+è%ñgí—ïaïFï Ð ÂaÉfÅ Î ÓaȓÂFË ÍiÎ ÓaȓÂFËÂfñí—ïaïFï Ð ¼fÂQûFÃ&Å×ûa×߿ÓFÀ6Ã7ÅðÓFË+¼×Ĉ¾Ò̾ÒËÒÃ7Ê<¾ àxÓaÄÂÑËÒÄ+ÂFÆÇ˽ØÉã Î&Ï ¾ Ü »2¼×̅¼fÂ]ûFÃ<ÓaÈfË+ÂF½¿Éf×ÅK½¿Ê*× À“ÄÒÓ]ûÃ—Å?ÂaÆÇƗÙÄ+ÂaƂÌÈÌÙf¾½ØÉã…ÂaÉ Î&Ï Å“½¿Æ‚Ë+½ÞӐÉfÂFÄÌ Ü ò ÃƗÂQÄÄ+½ÞÃ7ÅäÓaÙfËCÂaÉäÂÑÀ6׼ØÊ*Ã7ÉyËCá½Þ˼äÂaÉ Î&Ï Å“½¿Æw× Ë½¿ÓaɓÂQÄÌ Üöò ÌÙf¾ÒÃ7Åá¼×˼fÃÇÄ*ÓFÄ.ÉÓaËÚ˼fà ËÂFÄÒãaÃÇË Ê.ÓFÄÀ™¼Ã7Ê.Ãp½¿¾.½¿É ˼fà Î&Ï Åf½ØÆ‚Ë+½ÞӐÉfÂFÄÒÌ÷ÂF¾.Â4àù×ÂQ× ËÙfÄÃ Ü ò ÃKٓ¾ÒÃ7ÅYË+¼Ã¾ÂFÊ.ÃÅf½ØÆ‚˽¿ÓaɓÂQÄÌ ÂF¾1ٓ¾ÒÃ7ÅYÈyÌ K)ÓaÄ˼M὿Æ~è1ÂaÉfÅ Î ÓFșÂQË+ÂñÂ]û]Âa½¿ß¿ÂFȓ߿ÃÚÓaÉ þ ×èy½ØÉÃݾ áÃÇÈY¾+½ÞËÒà ͥþ ÃÇ轿ÉfÃFñÚí—ïaïFï Ð »2¼“½¿¾p½¿¾pÂFÉ Î&Ï Åf½ØÆw× Ë+½ÞÓaɓÂFÄÒÌ Óaàp˼fÃcÉfÂaÊ.×¾ÓaàpÓFÄãÂFɓ½ 7ÂQ˽¿Óaɓ¾?ÂaɓŠ߿ÓÆÇÂF˽ÞӐɓ¾Çñ á2½¿Ë¼ ÂFÈ6ÓaÙfË íQñéÃ7ÉyËĽ¿Ã—¾ Ü ò à Âaß¿¾ÒÓ4ÃÇÑyËÄÂaƂËÃ—Å Î Ï ¾åἓ½¿Æw¼?ÂFÀfÀ%Ã7ÂQÄÃ7Å˼fÄÒ×ÌÓFÄ Ê.ÓaÄÒÃú˽¿Ê.Ã7¾Œ½ØÉYÂ?ËÄÂa½¿É“½¿ÉfãìÆÇÓFÄÀ“Ù“¾ÁÂFɓÅYÂaÅfÅf׊Ë+¼Ã7Ê ËÓg˼fà Î&Ï Åf½¿ÆÇ˽¿ÓaɓÂQÄÌ Ü ÿ È6ӐÙËYíFñ cJ Î&Ï ¾2áˆÃÇÄÃÂÑËÒÄ+ÂFÆÇËÒÃ7Å Í UêXW ÿ&Î éZY ÿ »éZU ÎSš 1AI 1ñ  Ï ê þ U ÎSš 99Fñ [˜U ý)ÿ »éU ÎSš 99Fïñ ÿ 껈éõ× ] ÿý » š c÷yñ‡^ ÿ » ÏXš 199ñ»é Û4Ï š 9fíQñ Û U Î&Ïauš 1fíQñ Ï ê ý)ÏNÎ » š c5÷ñU–N»ˆéZU Î ÿ [ š ÷F Ð+Ü »2¼fÃ^ËÒÓF× Ë+ÂFßÉٓÊäÈ6×Ä&Óaà Î&Ï ¾½¿Éö˼fà Î&Ï Å“½¿Æ‚Ë+½ÞӐÉfÂFÄÌ1áÂa¾ Ë+¼Ã7É4ÂQÈÓaÙfËL1yñBc Ü&ò ÃÚÙf¾ÒÃ7Å% ü&Û4ÿ&Î ËÓÁÊ.ÓFÄÒ× À™¼ӐßÞÓaãa½ØÆÇÂaß¿ßÞÌ ÂaÉfÂaßÞÌH ÇÃ˼fÃ Î Ï ¾ö½¿É#Ë+¼Ã?Åf½ØÆ‚Ë+½ÞÓQ× É“ÂQÄ̐ñ%ÂaÉfÅöÂa¾¾½ÞãÉÃ7ÅUÓaÉfÃÚÓaàZË+¼à Î&Ï ß¿ÂFÈ%Ã—ßØ¾Ë+¼fÂFË áÃ4ÅÃÇæ™Éf×ÅY½¿É þ ׯÇ˽ÞÓÉ 1ËÓ÷×ÂaÆw¼IÊ.ÓFÄÀ™¼Ã7Ê.Ã Ü »2¼fÃÇÄÃá2ÂF¾^ÂóËÒÓaËÂaßÓaà%ÂQÈÓaÙfË2í‰ñéóÊ.ÓaÄÒÀ™¼f×Ê.Ã7¾ ½ØÉÚ˼fà Î&Ï Å“½¿Æ‚Ë+½ÞӐÉfÂFÄÌ Ü§ò ¼Ã7É*Âó¾ÒËÒÄ+½¿ÉfãàxÓaÄ^ÂËÂFÄÒ× ãaÃÇËZÊ.ÓaÄÒÀ™¼f×Ê.Ã)áˆÂa¾CàùÓaٓɓÅÚ½¿ÉåË+¼Èœ½¿ÆÇ˽ÞӐɓÂQÄÌañ]áà ٓ¾Ò×Å*˼fà Î&Ï ß¿ÂFÈ%×ßfÂF¾¾½ÞãÉf×ÅÚËÓË+¼Ã2ƂÓaÄÒÄÃ7¾ÒÀ%ÓaɓÅ× ½ØÉãöÊ.ÓFÄÀ™¼Ã7Ê.Ã<½¿ÉKË+¼à 
œ½¿Æ‚Ë+½ÞӐÉfÂFÄÒ̅Âa¾äÂ1àx×ÂFËÙfÄà ûFÂaß¿ÙÃ Ü »ZÂQșßÞ÷F*¾¼fÓ]á¾)Ë+¼ÃÄ×¾ٓßÞË2ÓFȓËÂa½¿Éf×Åpá2½¿Ë¼Á˼fà Î&Ï Å“½¿Æ‚Ë+½ÞӐÉfÂFÄÌ Ü »2¼ÃÂFƗÆÇÙfÄÂaƂ̌ÂF¾ˆÃÇÑyÀ“Ä×¾¾ÒÃ7ŌÈyÌ Ë+¼Ô]§×¥Ê.×Âa¾ÙfÄÒý¿Ê.ÀfÄÓFûF׊ÈfÌðÂQÈÓaÙfˈËâá)ÓÚÀ6Ӑ½¿É˾ ½ØÉç˼fÃU¾ÒÀ%Ã7ÆÇ½Þæ™ÆUÅfÓaÊ<Âa½¿ÉIÂFɓÅçÂQÈÓaÙfË<ӐÉfÃUÀ6Ӑ½¿ÉË ½ØÉU˼fÃäãa×Éf×ÄÂaß§ÅӐÊ<ÂF½¿Éñ%Ó]ûa×Ä˼fÃÚÂFƗÆÇÙfÄÂaƂÌpÓaÈf× Ë+ÂF½¿ÉfÃ7Åöá½Þ˼fÓaÙfËË+¼à Î&Ï Å“½¿Æ‚Ë+½ÞӐÉfÂFÄÌ Ü éõàSáÃÚ¼fÂaÅ ÂaÉ Î&Ï Å“½¿Æ‚Ë+½ÞӐÉfÂFÄÌ.á2½¿Ë¼ Ê.ÓFÄà Ã7ÉMËÒÄ+½ÞÃ7¾Çñaáà ƂӐÙf߿ŠÂaƂ¼“½ÞÃÇûFÃó̐×˼f½¿ãa¼fÃÇÄÂaÆÇƗÙÄ+ÂFƗ½ÞÃ7¾ Ü ÿ*‘ -; +*7 <Ž’“358” ò ½¿Ë¼ŒÄ×ãaÂFÄÅÁËÒÓ<ÉfÂaÊ.×ÅÁ×ÉyË+½ÞËõÌ<ÃÇÑyËÄÂaƂË+½ÞӐɌàxÄÓÊ Ï Éfãß¿½¿¾¼ ¾Ò×ÉyËÒ×ɓÆÇ×¾ÇñN¾ÒËÂF˽¿¾ÒË+½¿Æ—ÂF߈Ê.×˼fÓÅf¾*șÂF¾ÒÃ7Å ÓÉ Â#¼f½ØÅfÅfÃ—É Û ÂFÄÒèÓ]ûÖÊ.ÓyÅfÃ—ß Í ‰ Û4ÛKÐ÷Í K½ÞèÃ7ß Ã—ËŒÂFß Ü ñäí7ïFïAIJG Û ½Øß¿ßÞÃ—Ä ÃÇËÁÂaß Ü ñäí—ïaïÐ ñÂ÷ÅÃ7ÆÇ½Ø¾½ÞÓÉ ËÄÒ×ÃcÊ.ÓÅÃ7ß ÍPý ÓQá½ÞÃañöí7ïFï<÷ Ð ñŒÂFÉ Û?Ü Ï2Ü Ê.ÓÅÃ7ß Í KˆÓFÄ˼yá2½ØÆ~èúÃÇË*Âaß Ü ñ í—ïaïQ РñSÆÇÓaߨßÞÓyÆÇÂF˽¿ÓaÉì¾ÒËÂF˽¿¾× Ë½ØÆÇ¾ Í [C½¿Éñˆí—ïaïÐ ñZÂaɓÅKÂ1ËÒÄ+ÂaÉf¾ÒàùÓFÄ+Ê<ÂQË+½ÞӐÉyץȓÂa¾Ò׊ÃÇÄÄÓFÄÒ×õÅÄ+½ÞûaÃ7Ép߿×ÂFÄ+Éf½¿ÉfãðÊ*ÓyÅfÃ7ß Íiÿ È%×ÄÅf××ÉpÃÇËÂFß Ü ñ í7ïFï<÷ Ð ¼fÂkûÃ)È%ÃÇÃ7É*À“ÄÓFÀ6Ӑ¾Ò×Å<¾ÒÓàiÂQÄ Ü éâÉ.˼fà Û4üóý ƂӐÊ.À%×˽ÞË+½ÞӐÉñ ˼fż“½Þãa¼fÃ7¾ÒËpÂFƗÆÇÙfÄ+ÂFÆÇÌI¼“ÂF¾ŒÈ%Ã—Ã—É ÂaÆw¼f½¿ÃÇûaÃ7ÅðÈÌ<ÂÚ¾Ò̾ÒËÒÃ7Ê Æ—ÂFߨßÞÃ7Å Î ÌyÊÚÈ“ß¿Ã Í Kˆ½¿èaÃ—ß™Ã—Ë Âaß Ü ñ í—ïaïJI Ð á¼f½ØÆw¼÷½¿¾ÚȓÂa¾Ò׊ÓaÉ ÂFÉ ‰ Û4Û?Ü »2¼“½¿¾ ¾Ò̾ÒËÒÃ7Ê-ÃÇÑËÒÄ+ÂFÆÇ˾ Î&Ï ¾óÈÌ4ÂFÀfÀ™ßÞ̽¿ÉfãpË+¼Ã<àxӐ߿ßÞÓFáˆ× ½¿Éfã4À“ÄÒÓfƂלÙfÄÒà ÜKÿ æ™É“½ÞËÂץ¾ÒË+ÂQËÃÁËÄÂaÉf¾½¿Ë½ÞӐÉ÷ÉfÃÇËÒ× á)ÓaÄè;½¿¾À“ÄÃÇÀ™ÂQÄÃ7Å Ü Ï ÂFÆÇ¼Ô¾ÒËÂFËÒÃIÓFàpË+¼ÃcÉfÃÇËÒ× á)ÓaÄèUÄÃÇÀ“Ä×¾ÒÃ7Éy˾ÂaÉ Î&Ï ÅfÂæ6Éf×ҽØÉ…Ë+¼à Û4üóý × Î&Ï Ë+ÂF¾Òè%ñ^¾ٓÆw¼ÂF¾u Ï ê þ U Î ÓaļUêXW ÿ&Î éZY ÿ × »éU Î ñÓaÄÄÃÇÀ“ÄÒÃ7¾Ò×É˾ Î U»ˆ× ÿ × Î&ÿ&Û4Ï á¼“½¿Æ‚¼ Ê.×Âaɓ¾ŒË+¼Ãúá)ÓaÄÅY½¿¾pÉÓaËpÂ÷ÅfÂæ6ÉÃ7Å Î&Ï2Ü2Ï ÂFÆw¼ ËÒÄ+ÂaÉf¾½¿Ë½ÞӐÉö¼“ÂF¾ÂðËÒÄ+ÂFɓ¾½ÞË+½ÞӐÉöÀfÄÓaȓÂFȓ½Øß¿½ÞËâÌañ™á¼f½ØÆ‚¼ ÄÃÇÀ“Ä×¾ÒÃ7ÉM˾Ë+¼ÃäËÒÄ+ÂaÉf¾½¿Ë½ÞӐÉC ¾ƂӐɓÅf½ÞË+½ÞӐÉfÂaßÀ“ÄÓFșÂ]× È™½¿ß¿½¿ËâÌ÷àùÓFÄ Â㐽ÞûaÃ7É ½ØÉÀ™ÙËðáÓFÄ+Å Ü »2¼fÃ4ÂFɓÂFß¿Ìy¾½¿¾ ½¿¾<ÂK¾Ò×ÂFÄ+Ƃ¼÷àùÓFÄ<˼fÃpÓaÀ“˽¿Ê<Âa߈À™ÂQË+¼ì½ØÉ Ë+¼ÃUÉ×Ë× á)ÓaÄèöἓ½¿Æw¼KÙf¾ÒÃ7¾Ë¼fà 
ö ½¿ËÒÃÇÄș½SÂaßÞãaÓFÄ+½ÞË+¼fÊ Ü »2¼à ¾ÒËÂFËÒÃ7¾^½¿ÉÚË+¼Ã2ÓFÀ“Ë+½¿Ê<ÂFßyÀ™ÂF˼Ú㐽ÞûMÈÙf¾ Î&Ï ¾ Ü éÉÚË+¼à Óa˼fÃÇÄ ¾Ò̾ÒËÒÃ7Ê.¾ÇñÉfÂaÊ.׊Ã7Éy˽¿Ë½ÞÃ7¾<ÂFÄÒÃöÃÇÑyËÄÂaƂË׊ÈyÌKÂö¾½¿Ê<½Øß¿ÂFÄåÀfÄÓÆÇלÙÄÃFñ^ÂÑfƂ×ÀfËå˼“ÂFËäË+¼à á2Â7Ì Óaà ×¾ÒË+½¿Ê<ÂF˽¿ÉfãúË+¼Ã1À“ÄÒÓaȓÂFș½¿ß¿½ÞËÒÌûFÂQÄ+½ÞÃ7¾ Ü K)ÓaÄÒË+¼y× á½¿Æ~è ÂFɓÅç¼f½Ø¾ðƂÓ]áˆÓFÄèF׾<¾ÒÃ7ßÞÃ7ƂË×Åç¾ÒÃÇûÃÇÄ+ÂFß2¾ÒÌy¾× ËÒÃ7Ê<¾åἓ½¿Æw¼ÓaÈfË+ÂF½ØÉÃ7Å÷Â4¼“½Þ㐼÷ÂFƗÆÇÙfÄ+ÂFÆÇ̽¿É?Ë+¼à Û4üóý × Î&Ï Ë+ÂF¾ÒèYàxÄÓÊ ÂFÊ.ӐÉãç˼fӐ¾ÒÃKșÂF¾׊ÓaÉ ¾ÒËÂF˽ؾÒ˽¿Æ—Âaß6Ê.×˼fÓœ¾)ÂaÉfŌ˼fÓa¾ÒÃșÂa¾Ò׊ӐɌ¼“ÂFɓÅy× Æ‚Ä+ÂFàxË×ÅäÄ+Ùf߿×¾ÇñQÂaÉfÅäÓaÈfË+ÂF½ØÉÃ7ÅäÈ6ÃÇËËÒ×ħÄÒÃ7¾ٓßÞ˾§Ë¼“ÂFÉ ÂaÉM̅ÓFàˆË¼f̽¿Éfœ½Þû½¿Å“ÙfÂaßS¾Ò̾ÒË×Ê<¾äÈ̅½ØÉMËÒ×ãFÄ+ÂF˽¿Éfã ˼fÃ7Ê ÓÉp˼fÃșÂF¾½Ø¾ÓFà˼fà Û?ÜÝÏÜ Ê.ÓyÅfÃ—ß Í K)ÓaÄÒË+¼y× á½¿Æ~èYÃÇË4Âaß Ü ñŒí7ïFïQ Ð+Ü »¼fÃÇÌ Ä×À%ÓFÄË×Åî˼“ÂFËU ãaÓyÓyÅcÂFƗÆÇÙfÄ+ÂFÆÇÌìá2¼“½¿ÆÇ¼ ¾ÙfÄÀ“Âa¾¾ÒÃ7Åc¼yٓÊ.ÂaÉçÀ%×Ä× àxÓaÄ+Ê<ÂFɓƂÃƂӐÙfߨÅ*È%Ã2ÓFȓËÂa½¿Éf×Å<àxÓaÄÂóÆÇÃÇÄËÂa½¿É<ÅfÂFË ¾Ò×ËÈÌ<½¿ÉMËÒ×ãFÄ+ÂF˽¿Éfãå¾ÒÃÇûÃÇÄ+ÂFߙ¾ÒÌy¾ÒË×Ê<¾ Í KˆÓFÄ˼Má½ØÆè ÃÇËÂFß Ü ñ§í—ïaïQÈ Ð~Ü ò ½¿Ë¼ÕÄÃÇãÂFÄÅ ËÒÓ É“ÂFÊ.Ã7Å Ã7ÉyË+½ÞËõÌ;ÂÑËÒÄ+ÂaƂ˽¿ÓaÉ àxÄÓÊ aÂQÀ™ÂFÉf×¾ÒÃç¾Ò×ÉËÒÃ7ÉfÆÇ×¾Çñ.¾½ØÊ<½¿ß¿ÂFľÒËÂF˽¿¾ÒË+½¿ÆÇÂaß Ê.ÃÇË+¼ÓfÅf¾¼fÂkûÃSÈ%××ÉäÀ“ÄÓFÀ6Ӑ¾Ò×ÅñF½¿É“ÆÇߨÙfœ½¿ÉfãÊ.ÃÇË+¼y× Óœ¾1șÂF¾ÒÃ7ÅîÓaÉÖÂaÉð‰ Û4Û Í¥þ ¼f½ØÉfÉfÓaÙñ í—ïaïFï Ð ñå Åf×Ɨ½¿¾½ÞӐɅËÄÃÇÃ<Ê.ÓyÅfÃ—ß Í¥þ ÃÇ轿ÉfÃ.×ËÂaß Ü ñNí—ïaï-G Î ÓF× È™ÂQË+Âñ^í7ïFïaï Ð ñÂaÉfÅ4ÂaÉ Û?Ü Ï2Ü Ê.ÓÅfÃ—ß Í KˆÓFÄ˼yá2½ØÆ~è6ñ í7ïFïaï Ð~Ü KˆÓFÄ˼á2½ØÆ+èÍ ¾ÂQÀ“ÀfÄӐÂFƂ¼½¿¾§¾½¿Ê<½¿ß¿ÂFÄCËÒÓӐÙÄ+¾ ÂÑfÆÇÃÇÀ“ËË+¼fÂFË)¼Ã&ٓ¾Ò׊¼“ÂaÉfÅ×¥ÆÇÄÂFàxË×ÅðËÄÂaÉf¾àxÓFÄ+Ê<ÂQ× Ë½¿ÓaÉäÄ+ٓßÞ×¾§á¼f½ØßÞÃSáÃÙf¾ÒÃ)ÂaÙËÓaÊ<ÂF˽¿Æ—ÂFߨßÞÌÂaÆÇøyÙf½ÞÄÃ7Å Ä+Ùf߿×¾KÂaßÞӐÉÃ Ü »2¼fÃçÂaÆÇƗÙÄ+ÂaƂÌgá)à ÄÒ×À6ÓFÄËÒÃ7ÅÔ½¿É þ ׯÇ˽ÞӐÉ9 Ü Fp½¿¾óÈ%×ËÒËÃÇÄ˼“ÂFÉúË+¼fÂFËóá¼f½ØÆ‚¼%KˆÓFÄË¼× á½¿Æ~èŒÓFȓËÂa½¿ÉfÃ—Å Ü U&ÙfÄ Ê.ÃÇË+¼ÓfÅ1½Ø¾Ê.ÓFÄÃÂaƗÆÇÙfÄÂFËÒà ˼“ÂaɅÂaÉyÌ1Óa˼fÃÇľÒÌy¾ÒËÃ—Ê È™ÂF¾ÒÃ7Å4ӐÉúÂp¾ÒËÂF˽¿¾ÒË+½¿Æ—ÂFß Ê.ÃÇË+¼ÓfÅ Ë+¼fÂFË*À™ÂQÄË½ØÆÇ½ÞÀ™ÂFËÒ׊½¿Éì˼fÃ1ߨÂF¾ÒË<éâê Ïë × Î&Ï áÓFÄèy¾+¼ÓaÀñÂFɓн¿¾.ƗßÞӐ¾ÒÃÁËÒÓú˼“ÂQË*ÓaȓËÂa½¿ÉÃ7Å ÈfÌ.˼fþÒÌy¾ÒËÃ—Ê á¼f½ØÆw¼ ÓFȓËÂa½¿Éf×Ō˼füf½¿ãa¼f×¾ÒË2ÂFÆ‚× Æ—ÙÄ+ÂaƂ̌àxÓFÄË+¼Ãéâê ÏSë × Î&Ï 
Ë+ÂF¾Òè Ü • – BCVNJ T _N\Q>ABCV »2¼“½¿¾SÀ™ÂQÀ6×ÄNÅÃ7¾ƂÄ+½ÞÈ6Ã7Å<˼fÃÃÇÑyËÄÂaƂË+½ÞӐÉ.Óaà%ɓÂFÊ.Ã7Å Ã7ÉyË+½Þ˽¿Ã—¾ӐÉä˼fÃNȓÂa¾½¿¾CÓaà“ÂFÉ Û?Ü Ï2ÜMÍ Ê<Â]Ñf½¿Ê.ÙfÊ#Ã7É× ËÄÒÓaÀÌ Ð Ê.ÓyÅf×ßfÂaÉfÅ.ËÄÂaÉf¾ÒàùÓFÄ+Ê<ÂQË+½ÞӐÉ*Ä+ٓßÞ×¾ ÜSÏ ½Þ㐼MË ËÌyÀ™Ã—¾pÓaà Î&Ï ÂFÄÃKÅfÂæ6ÉÃ7Å#ÈyÌçéê ÏSë × Î&Ï ñÂFɓŠÃ7ÂFÆÇ¼ Î&Ï ÆÇÓaɓ¾½¿¾ÒË+¾2ÓaàZӐÉÃäÓaÄ Ê.ÓFÄÃäÊ.ÓaÄÀ“¼f×Ê.Ã7¾Çñ ÓaÄ.½ØÉfƗ߿ٓÅÃ7¾ÚÂú¾Ùș¾ÒËÒÄ+½¿Éfã4Óaà Â4Ê.ÓFÄÀ™¼Ã7Ê.à Üò à ÅfÂæ6ÉÃ7Å cJ Î Ï ß¿ÂFÈ%Ã7ß¿¾ðËÓ ½¿É“Åf½ØÆÇÂFËÒÃöË+¼Ã4È6×ãa½ØÉy× É“½¿ÉfãfñÊ<½¿Å“Å“ßÞÃFñÂaÉfÅ Ã7ÉfÅYÓFà Î&Ï ¾ÇñÂaɓÅcÂÑËÒÄ+ÂaÆ‚Ë Î&Ï ¾Ná2¼“½¿ÆÇ¼.ÆÇÓaɓ¾½¿¾ÒËNÓaàӐÉfÃÓaÄ)Ê.ÓaÄÒÃ&Ê.ÓaÄÒÀ™¼f×Ê.×¾ ÈfÌä×¾ÒË+½¿Ê<ÂF˽¿Éfã˼fÃߨÂQÈ6Ã7ß¿¾ÂaÆÇÆÇÓFÄ+œ½¿ÉãËÒÓäÂFÉ Û?Ü Ï2Ü Ê.ÓÅfÃ—ß ÜZÿ àùËÒ×Ä˼“½¿¾ZÃ7¾Ò˽¿Ê<ÂF˽¿ÓaÉñQáˆÃNÃÇÑyËÄÂaÆ‚Ë Î Ï ¾Çñ ἓ½¿Æ‚¼p½¿ÉfƗ߿ٓÅÃåÂ<¾Ùș¾ÒËÒÄ+½¿ÉfãðÓFàZÂ<Ê.ÓaÄÀ“¼f×Ê.ÃañfÈÌ Ù“¾½¿Éfã?ËÒÄ+ÂFɓ¾ÒàxÓaÄ+Ê<ÂQ˽¿ÓaÉ Ä+Ùf߿×¾ Ü »2¼f×¾ÒÃUÄ+Ùf߿×¾ðÂFÄà ÂaÙËÓaÊ<ÂF˽¿Æ—Âaß¿ßÞÌ#ÂaÆÇøyÙf½¿ÄÒÃ7ÅîÈyÌ ½¿ÉMûaÃ7¾Ò˽¿ãaÂF˽ØÉã ˼fà œ½EDÃÇÄ×ɓÆÇÃÈ%ÃÇËá)ÃÇÃ7É Î&Ï ß¿ÂFÈ6Ã—ßØ¾ˆ½ØÉÁÂÚË+ÂQãaãa×ÅÁÆÇÓFÄÒ× À™Ùf¾ZÂaÉfÅåË+¼Ӑ¾ÒÈÃÇÑyËÄÂaÆÇËÒ×ÅÚàxÄÓaÊg˼fȾÂFÊ.ÈÆÇÓFÄÀ™Ùf¾ á½ÞË+¼ӐÙË2ËÂFãa¾ÈyÌðӐÙÄ ¾Ò̾ÒËÒÃ7Ê Ü »2¼fÄÓaÙfãa¼ ÓaÙfÄÁÃÇÑÀ%ÃÇÄ+½¿Ê.Ã7ÉM˾Çñ áÅàxӐÙfɓÅcË+¼fÂFË Ë+¼ÃóËÒÄ+ÂFɓ¾ÒàxÓaÄÊ<ÂF˽¿ÓaÉÁÄٓßÞÃ7¾ƂӐÉMËÒÄ+½ÞșÙËÃ&ËÒÓ<ÂFÉp½¿Ê*× À“ÄÓQûF×ÅIÂaÆÇƗÙÄ+ÂaƂÌañßÞÂÑf½¿Æ—ÂFß&½¿ËÒ×Ê<¾ŒÂQÄÃ4Ë+¼ÅÊ.Ӑ¾ÒË ½ØÊ*ÀÓFÄËÂaÉMËàù×ÂFËÙfÄ×¾ÇñQÂaÉfÅ˼fÃÈ6×¾Ò˧ÂFƗÆÇÙfÄ+ÂFÆÇÌ&áÂF¾ ÂaƂ¼“½ÞÃÇûFÃ7Åäá¼Ã7Éåá)Ã)ٓ¾Ò×ÅÚ˼fÃ)àx×ÂFËÙfÄ×¾ÓFà“Ë+¼Ã)Ë+ÂQÄÒ× ãaÃÇËÊ.ÓaÄÒÀ™¼f×Ê.ÃÚÂFɓÅöË+¼ÃÚàùÓaÙfÄ&Ê.ÓFÄÀ™¼Ã7Ê.×¾ÆÇß¿Óa¾× Ã7¾ÒËöËÒÓc½Þ˗ñå½ Ü Ã Ü ñË+¼Ã?Ëá)Ó ÓÉÖ˼fÃ÷ßÞ×àxË4ÂFɓÅî˼fà Ëá)ÓóӐÉð˼fÃ2Ä+½Þ㐼y˗ñaá¼f×ÉðÂËÒÄ+ÂF½ØÉf½ØÉãÚÆÇÓFÄÀ™Ùf¾Sá½ÞË+¼ í1yñé&¾Ò×ÉËÒÃ7ÉfÆÇ×¾§áˆÂa¾§Ù“¾ÒÃ—Å Ü »¼fÃ7¾ÒÃ)ÄÒÃ7¾Ùf߿˾áˆÃ—Äà ÓaÈfË+ÂF½ØÉÃ7Å á½Þ˼ŒË¼fý¿ÉfàxÓaÄÊ<ÂF˽ÞÓÉ ½¿É Ë+¼fÃ&ËÒÄ+ÂF½ØÉf½¿Éfã ÆÇÓFÄÀ™Ùf¾<ÂaßÞӐÉà Ü÷ò ¼Ã7É áˆÃpٓ¾Ò׊ÂFÉ Î&Ï Åf½ØÆ‚Ë+½ÞÓQ× É“ÂQÄÌ*á¼f½ØÆ‚¼.½Ø¾Â]ûQÂF½Øß¿ÂFȓßÞÃ2ӐÉ.Ë+¼ÃáÃÇÈðÂF¾SáÃ7ß¿ßiñá)à ÂaƂ¼“½ÞÃÇûFÃ7ÅUÂFÉ©]§×õÊ.×Âa¾ÙÄÃÚÓaà×<÷ Ü I/÷<àùÓFÄ& ¾ÒÀ×ÆÇ½Þæ™Æ ÅfÓaÊ<Âa½¿ÉñZÂFɓÅ5 Ü í I àùÓFÄåÂ1ãFÃ7É×Ä+ÂFßÅӐÊ<ÂF½ØÉñ§àxÓaÄ éê ÏSë × Î&Ï àùÓFÄ+Ê<ÂFßÞ×PÄ+ÙfÉUœÂQË+Â Ü 
»2¼fÃÇÄÈÂFÄÒȾÒ×ûÃ—Ä+ÂFߐÀ%Ӑ¾¾½Þȓ߿Ã)àùÙfËÙfÄÈÅf½¿ÄÒÃ7ƂË+½ÞӐÉf¾ Ü éÉpÀ™ÂFÄÒË+½¿ÆÇٓ߿ÂFėñ“áÃóÂFÄÒÃ彿ÉËÒ×ÄÒÃ7¾ÒËÒÃ7Åp½ØÉpË+¼ÃàxӐ߿ßÞÓFá× ½ØÉã𽿾¾Ùf×¾ Ü j ]Z½¿É“Å“½¿Éfã<Ã'D%Ã7ƂË+½ÞûaÃàxÃ7ÂQË+ÙÄ×¾ ò ÃÂÑÀ6ׯÇËZ˼“ÂQËSá)ÈƗÂFÉ<ÂFƂ¼f½Þ×ûFü“½Þ㐼×Ä^ÂaÆw× ÆÇÙfÄÂaƂÌpÈÌÁٓ¾½¿Éf㌽¿ÉfàxÓaÄÊ<ÂF˽¿ÓaÉUË+¼fÂFË á)ÃåÂQÄà ÉÓaËٓ¾½¿Éfã1ÂFËóË+¼ÃðÊ.ÓaÊ.Ã7ÉyËÇñC¾ÙfÆw¼úÂF¾½ØÉàùÓFÄÒ× Ê<ÂQ˽¿ÓaÉðÓÉ Å×À6×ɓÅf×ɓÆÇ½¿Ã—¾ÈÃÇËâáˆÃÇÃ7É.À™¼fÄÂa¾ÂFß Ùfɓ½ÞË+¾öÆÇÂa߿߿×Ŭ șٓÉf¾Ò×˾ه¿ñÚÂFɓÂQÀ™¼fÓFÄ+½¿ÆÄÃ—ßØÂ]× Ë½ÞӐÉf¾—ñ)ÂaɓÅìË+¼Ãö½¿ÉfàxÓaÄ+Ê.ÂF˽¿ÓaÉ ã½ÞûaÃ7Éì½ØÉìË+¼à ÀfÄÓyƂÃ7¾¾2ÓaàÂaÉfÂaßÞÌH —½¿Éfã.ËÂÑË Ü j ý ÓFÄÀ“Ù“¾2ÄÃÇû½¿¾½¿ÓaÉ1ÂFɓÅ1ÂFÉ Î&Ï Å“½¿Æ‚Ë+½ÞӐÉfÂFÄÌ ò ÈàùÓaٓÉfÅ*˼“ÂQË^ÃÇÄÄÓFÄ+¾^½¿É<Â&ËÒÄ+Âa½¿É“½¿ÉãƂÓaÄÀ“Ù“¾ á½¿ßØß߿×ÂaÅ ËÒÓ÷Â?ßÞÓFáNÃÇÄ ÂaÆÇƗÙÄ+ÂaƂÌañÂaÉfÅçË+¼“ÂQË Å“½¿ÆÇ˽¿ÓaɓÂQÄÌú½¿ÉfàxÓFÄ+Ê<ÂF˽ÞӐÉ?¼f×ßÞÀ™¾ËÓ4½¿Ê.ÀfÄÓ]ûaà Ë+¼Ã.ÂFƗÆÇÙfÄÂaÆ‚Ì Ü »2¼fÃÇÄÃÇàùÓFÄÃFñƂÓaÄÒÀ™Ùf¾&ÄÃÇû½¿¾½¿ÓaÉ ¾¼fӐÙfß¿ÅäÈ6ÃÂFÆÇ˽ÞûÃ—ßÞÌó¾ÒË+Ùfœ½ÞÃ7ÅñFÂaÉfÅåߨÂQÄãFÃ—Ä Î&Ï Å“½¿ÆÇ˽¿ÓaɓÂQÄ+½ÞÃ7¾á½¿ß¿ßÂaß¿¾ÒÓðÈ6Ãå¼Ã7ßÞÀ“àùÙ“ß Ü ò ÃúÊ<ÂkÌIÈ6ÃKÂFȓ߿ÃúËÓ ËٓÉfÃúË+¼ÃúÊ.ÓÅfÃ7ßËÒÓ Â.À™ÂQÄË½ØÆÇٓ߿ÂQÄÅfÓaÊ<Âa½¿ÉpÈyÌðÀ“ÄÃÇÀ™ÂQÄ+½¿Éfã ÂFÉ Î&Ï Å“½¿ÆÇ˽¿ÓaɓÂQÄÌìÂFœÂQÀ“ËÒÃ7Å ËÒÓ˼fÃ4ÅӐÊ<ÂF½ØÉ Ücò à áˆÓaٓ߿ÅKß¿½ÞèMÃ.ËÓ1ËÄÌ4Ë+¼f½Ø¾Çñ§ÂaɓÅ?¾ÒÃÇà ¼ÓQá áÃ—ßØß ÂaÉ1ÂFœÂQÀ“ËÒÃ7Åpœ½¿ÆÇ˽ÞӐɓÂQÄÌÁá)ÓaÄÒè¾ Ü ¹ JG^VB,— T O™hWCO O6V§DF\ »2¼ÃÕÂaÙË+¼ÓaÄ+¾#á)ӐÙfßØÅ ß¿½¿èÃ;ËÒÓ Ë¼“ÂFÉfè þ ÂFËÒӐ¾¼“½ þ ÃÇ轿ÉfÈÂFɓŠÿ ɓÅÄÃÇáK)ÓaÄÒË+¼M὿ÆNàùÓFÄàùÄٓ½ÞËàùٓßyƂӐÊ*× Ê.×Éy˾ ÂFɓżÃ7ßÞÀ“àùٓßìÅf½Ø¾ÆÇٓ¾¾½ÞӐÉf¾ œÙÄ+½¿Éfã˼fà À“ÄÒÓaãFÄÃ7¾¾ÓFàZ˼“½¿¾áˆÓFÄè Ü ˜hO,™fO“`O™VNJ-O™\ U&…&p/{Ëzvq#}!€dq+q+{5¼1U&…pr{X»Ÿd}Dƒ&q#}¼­ÖZtyu€©ÖÅtü­È<Ãd{rq#~(~Dq ÀGyx}Dw(#prœŠtK{5¼˜¾ t~(}Dyx+yut¼Àz…&vry{rwD…&{5¼ÅtK{/€©ÕÂt}D›šGyxsutKyx{ˆ ª FGF _dˆ Õ<΋ÌCÀ«Â ЖÖ]q+w(#}Dy|d~Dy…&{څK‡¢~Dprq)Ë«ÈAÂÄÕ»ÄÎ‹Ê ÑÃdw~Dq+œ Ÿ/w(q€ ‡?…&}uÕœaÊ È(`dˆ²ÎT{6Ÿž¡ ¢P4£4£¤YM¥NT¦G§› ¨›©ZR8£ ª M€«G©¥R.¬›£S§4§SKL¦£u­<N)¤T£[ž4§4©(KYN®¤YMON¦u¯, YNL¨x£Sž4£[N)P[£=°Z¬ˆ­5¯E± ²S³ ¼/|/tKƒ&q+w ª å ª ´ ª _G_¹ˆ ËG€rtKœ Ȉ%»[q#}Dƒ&q#}¼ÄÑ~Dq+|rprq+{¼ËŽˆ-Ö]q+sst”¾yq#~(}!td¼tK{/€µšGy{dÈ +q+{¹~¶UrˆHÖZq+sxsut–¾ yxq#~(}!tdˆ ª FGF `dˆ<ˁÕltYD¹yœoŸ/œ  {¹~(}D…&|à 
Ëz|r|r}D…tK#p ~(…–Ï]t~DŸd}!tKsÁÈftK{rƒ&Ÿ8tKƒ&q ¾}D…+q+w(wDyx{/ƒrˆ›¯, Y·¸± ¹)º ©JKY©QMZ YN®KG»!¼EMON¦ º M¥§¡©QMZP[§!¼®cGc½ ª âРe F ´ d ª ˆ ÖZtK{/yxq+s£ÕÁˆñ»[yÊq+s¼Sѹ+…K~(~©Õlyxssxq#}¼£Àzyp8t}!€ ѹ#p²It}(~¡½K¼ tK{/€dÀGtKs|rp \ q+yw(!prq€dq+sˆ ª FGF d¹ˆ5ÏzÃdœovrsqƒÐ¢tÍÀ]yxƒ&pdÈ ¾q#}(‡?…K}DœŠtK{r+q Èqt}D{ry{rƒaÏ]t&œ†q#ÈÆ/{/€rq#}ˆ ÎT{…Ÿž¡ ¾P4£4£¤YM¥NT¦G§  ¨&©ZR8£¿5M ¨4©ZRÀ¯, YNx¨L£Sž4£[N)PS£Á YNˆÂ ¹a¹ » MQ£S¤µÃKY© º ž¡Ka»Ÿ¼KYN)± ¦ º KL¦T£Ÿž4 ¾PS£S§4§¡M¥NT¦&¼¹|/tKƒ&q+w ª F å ´ cK¯ ª ˆ Ëz{/€¹}!q#² »…K}(~Dp¹²Iy!ʼ)U&…&pr{·Ñ~Dq#}Dsyx{/ƒr¼ÂŸ/ƒ&q+{rq`Ëzƒyx#p~Dq#y{5¼ tK{/€?ÀGtKs|rp ÓG}!yxw(p/œ«t&{5ˆ ª FGF b&tdˆ ÂED¹|/sx…&yx~Dyx{/ƒÖZyµÈ q#}Dw(qÄÅ{r…²zsxq€dƒq«Ñ¹…&Ÿd}D+q+wyut`ÕÂt¢Ddyxœ†Ÿrœ5 {~(}D…&|ëyx{ Ï]tKœ«q€¼Â{¹~Dyx~TÃSÀzq++…&ƒ&{ryx~Dyx…&{ˆLÎT{=Ÿž¡ ¢P4£4£¤YM¥NT¦G§† ¨Å©¥R8£ ª M€«G©¥RÁÆ Yž¡Ç¢§R   ¹  YNƒÈ®£SžSɼKYžJ¦£/¯, Yž ¹  YžKK¼K|/tKƒ&q+w ª _Gc ´ ª `&¯dˆ Ëz{/€¹}!q#² »…K}(~Dp¹²Iy!ʼ)U&…&pr{·Ñ~Dq#}Dsyx{/ƒr¼ÂŸ/ƒ&q+{rq`Ëzƒyx#p~Dq#y{5¼ tK{/€ÀGtKs|rp?Ó]}DyxwDprœŠtK{5ˆ ª FGF bKv5ˆ Ïg/œ`ÐñÖ]q+w(#}Dy|dÈ ~Dy…&{þ…K‡ ~DprqlÕL…Ï\ Ï]tKœ«q€ {¹~Dyx~TÃÑÃdw~Dq+œ t&w œGwDq€Úyx{²ÕVœ˜Ê[È(d¹ˆ ÎT{ʝ$ž¡ LPS£¡£¡¤YMON¦Y§6 ¨6©¥R8£ ª £[˾± £[N8©ZR¶¬Á£S§4§[KL¦T£Å­iN®¤T£[ž4§4©(KYN)¤YM¥NT¦Å¯, YNL¨x£[ž4£SN®P4£Ì°Z¬ˆ­5¯E±Í ³ ˆ p¹~(~D|JÐ ÎGβ²I²Åˆ œ¿Ÿ/&ˆ w!tKyx&ˆ +…&œÅιˆ Ëz{/€¹}!q#² »…K}(~Dp²zy!ʈ ª FGFGF ˆ©ËUtK|/tK{rq+wDqLÏ]tKœ«q€4 {dÈ ~Dyx~›Ã£Àzq++…&ƒ&{ryϽ+q#} Ê[…&{rw(~(}DŸr#~Dq€Ív¹ÃÍt”ÏG…&{rÈÒѹ|qtKÊq#}o…K‡ UtK|/tK{rq+w(q&ˆ–Λ{ƒ$ž¡ LPS£4£¡¤YMON¦Y§† ¨1©¥R8£†ÐџÒ,ÓÔÆ Yž4Ç¢§R   ¹ ¼ |/tKƒq+w ª bad ´ ª F edˆ Â}Dyo» }Dyss"ˆ ª FGF _¹ˆ\Ìf}!tK{rw(‡?…K}DœŠt~Dyx…&{rÈ»tKw(q€<Â}(}D…K}(ÈÞÖG}Dyq+{ Èqt}D{ry{rƒ tK{/€Ï]t~DŸd}!tKsLÈftK{rƒ&Ÿ/tKƒ&q©¾}D…¹+q+wDw(yx{/ƒ Ð.Ë ÊtKw(q ѹ~(Ÿ8€¹ÃÁyx{£¾ t}(~(È…K‡PÈѹ|/q+q+#p£Ì%t&ƒ&ƒ&yx{rƒ/ˆµ¯, Y· ¹)º ©(KG± ©QMZ YN®KG»!¼EMON¦ º M¥§¡©QMZP[§!¼®c ª ½"å'âР_åTe ´ _Y`a_¹ˆ Uyxœ¯Ê[…²Iyq&ˆ ª FGF _¹ˆ5ÊCÀ ÈÎ7ϘՔÑœ.Ö]q+w(#}Dy|d~Dyx…{­…K‡¿~Dprq ÊCÀ«ÈÎ7ÏaÕLÑœ%ÑÃdw~Dq+œÕœGwDq€h‡?…K}«ÕVœ˜Ê È(`dˆÁÎT{VŸž4 LP4£4£¤a± M¥NT¦G§V ¨©ZR8£ ª M€«G©¥Rƒ¬Á£[§¡§[KL¦T£ˆ­iN®¤T£SžS§4©(KYN)¤YM¥NT¦;¯, 
YNL¨x£Sž[± £[N)P[£†°Z¬ˆ­5¯7± ²S³ ¼r|/tKƒ&q+w ª _Gd ´ ª `G`dˆ ÎÒÀ«Â ÇÂEDdq++Ÿd~Dyxq×Ê[…&œ«œ«yx~~Dq+q&ˆ ª FGFaF ˆCÎÒÀ«ÂÇXpr…&œ«q+|/tKƒ&q&ˆ p¹~(~D|JÐ ÎG΁+w+ˆ {ùŸˆ q€dŸ®Î+w4Î|d}D…®q+#~Dw4Î|d}!…K~(q+Ÿ/w4Îyx}Dq[D8ιˆ Ñdt&€/tK…¸ÄZŸr}D…&p/tKw(p/ytK{8€<ÕltKÊ…K~D…_Ï]tKƒtK…r¼ ª FGF bdˆÖ K ¹ KYNA£S§x£ ¬Á Yž ¹ R  G» S¦YMZP4KG»1Â>N®KG» ÉY§4MO§ ª ÉY§4©(£[·×Ö­A¬VÂÃØÈ)£SžS§4MZ YN ÙÚ¥² ˆ…Ö]q+|8t}(~Dœ†q+{¹~I…K‡-Λ{d‡?…K}DœŠt~Dy#w¼!Ä]ù…K~D…ÛœG{ryxq#}Dw(yx~TÈ ÖZq+Ê&tK{rƒ¼Èyx{ˆ ª FGF bdˆnœGw(y{rƒ4Ê…&sxs…t~Dy…&{©Ñ~!t~Dyxw~Dy+w yx{ Λ{d‡?…K}DœŠt~Dyx…{–ÂED~(}!tK#~Dy…&{5ˆ…Λ{VŸžS LP4£4£¤YM¥NT¦G§ ¨Ì©¥R8£ ª £S˾± £[N8©ZR¶¬Á£[§¡§[KL¦T£Å­<N)¤T£[žS§¡©(KYN®¤YMON¦Å¯E YNL¨x£[ž4£SN®PS£ °¥¬­,¯E±¡Í ³ ˆ p¹~(~D|JÐ ÎGβI²²Žˆ œoŸr&ˆ w!tKyx&ˆ +…&œÅιˆ ѹ+…K~(~5Õlyxssxq#}¼©Õñy!p/tKq+s4Ê[}(ùw(~Dt&s"¼­ÀGq+yu€dyf8…LD<¼dÈt&{r+q ÀGtKœ«w(p8t²Å¼ ÀIy#p/t}!€ Ñdp²t}(~¡½&¼ ÀIq+v<q++t¯Ñ~D…&{rq&¼ ÀGtKs|rp \ q+yw(prq€dq+s¼¢tK{/€ ~(p/qËz{r{r…K~!t~Dy…&{.Ó]}D…&Ÿr|5ˆ ª FaF bdˆ:Ëzsƒ&…K}Dyµ~Dprœ«wÄ~Dp/t~¼È5qtK}D{·~(…©ÂED~(}!tK#~¼ÎT{d‡"…K}(È œŠt~Dy…&{ »»ÄÏ`ИÖZq+w(#}Dy|d~Dyx…&{·…K‡„~(p/q4ѹyx‡P~ÑÃdw~Dq+œ tKw œGwDq€ ‡?…K}Õœ˜Ê[È*d¹ˆ ÎT{ܝ$ž¡ LPS£¡£¡¤YMON¦Y§t ¨t©ZR8£ ª £[˾± £[N8©ZR¶¬Á£[§¡§[KL¦T£Å­<N)¤T£[žS§¡©(KYN®¤YMON¦Å¯E YNL¨x£[ž4£SN®PS£ °¥¬­,¯E±¡Í ³ ˆ p¹~(~D|JÐ ÎGβI²²Žˆ œoŸr&ˆ w!tKyx&ˆ +…&œÅιˆ ÊpryÊ&tKw(pry-Ï]…&v/t~!tdˆ ª FaFGF ˆ`Ï]tKœ«q€·Â {¹~Dyµ~›ÃLÌ%tKƒƒ&yx{rƒ–ÑÃdwÈ ~Dq+œ »tKwDq€Â…{Át ÖZq++yxwDyx…&{SÌ%}Dq+qñÕñ…d€dq+s"ˆ`ÎT{›Ÿž4 LP4£4£¤a± M¥NT¦G§Ý ¨©ZR8£VÐџÒ5ÓÞÆ Yž¡Ç¢§R   ¹ ¼I|/tKƒ&q+wcK¯ ª ´ cK¯G`dˆ5½"yx{ UtK|/tK{rq+wDqZÃ!ˆ ËG€¹²ztKyx~<ÀGt~D{/tK|/t}Dʹpryˆ ª FGF `dˆ?Ë ÕÂt¢D¹yœoŸ/œ¬Â {¹~}D…|à Õl…¹€rq+s%‡?…K}×¾7t}(~(ÈÉG‡PÈѹ|8q+q+p·Ì7tKƒ&ƒ&y{rƒrˆ˜Î›{¯E YNL¨x£[ž4£SN®PS£  YN Ò$· ¹ MOžSMZPSKG»L¬›£S©ZR  ¾¤Y§EMONÛÐKY© º ž¡KG»x¼KYN¦ º KL¦T£7Ÿž¡ LPS£S§4§S± M¥NT¦&¼/|/tKƒ&q+w ª eGe ´ ª åTc¹ˆ ËG€¹²ztKyx~nÀGt~({8tK|/t}Dʹpryˆ ª FGF d¹ˆ Ë:Èyx{rqtK}"É]vrw(q#}Dq€ Ìyœ†q¼Ñ~!t~Dyw~DyxtKs`¾t}!w(q#}l»tKw(q€ …&{hÕÂt¢D¹yœ†Ÿrœ  {dÈ ~(}D…&|¹Ã_Õñ…d€dq+sw+ˆ-ÎT{߯, YNL¨x£[ž4£SN®P[£$ YN1ҟ· ¹ MOžSMQP4KG»G¬›£S©¥R8 ¾¤Y§ M¥NVÐKY© º žSKG» 
¼7KYNT¦ º KL¦T£>ŸžS LPS£S§4§¡MON¦&ˆ ÑË ÎÊGˆ ª FGF bdˆ Õœ˜Ê p/…&œ†q+|8tKƒ&q&ˆp¹~(~D|JÐ ÎGβ²I²Åˆ œŽŸr&ˆ wDtKy&ˆ +…&œÅιˆ Ñdt~D…&wDpry…ѹq+ʹy{rq&¼ÀGtKs|rpSÓG}Dyw(prœŠtK{5¼<tK{8€lÀ]yµ}D…ÃdŸrʹy ѹpry{dÈ {/…&Ÿ5ˆ ª FGF bdˆ¼Ë?ÖZq++yxwDyx…&{!Ì%}Dq+q0Õñq#~Dpr…r€Í‡?…K}Åf%y{/€dy{rƒ tK{8€LÊstKwDw(yx‡PÃdyx{rƒ Ï]tKœ«q+wGyx{ßUtK|8tK{rq+w(q_Ìfq[D¹~(wˆ‡ÎT{ߝŸž¡ G± P[£4£¤YMON¦G§> ¨>©ZR8£ ª M€«G©¥RƒÆ Yž4Ǿ§¡R   ¹  YN.È)£SžSɐ¼KYžQ¦T£¶¯, Yž[± ¹  YžKK¼r|/tKƒ&q+w ª d ª ´ ª dYbdˆ Ñdt~D…&wDpry–ѹq+ʹyx{rq&ˆ ª FGFGF ˆÑdt~D…&wDpry0Ñdq+Êy{rq pr…&œ«q+|/tKƒq&ˆ p¹~(~D|JÐ ÎGβI²²Žˆ +wˆ {ÃdŸ5ˆ q€rŸ)΁+w4Î|d}D…®q+#~Dw4ÎK|d}D…K~Dq+Ÿrw4ÎwDq+Êy{rq¾Î¹ˆ ÀGyx}D…ùŸ/ÊyHѹpry{r{r…&Ÿˆ ª FGFGF ˆÍÂ7D¹~(}!tK#~Dyx…&{l…K‡ ¾}D…&|8q#}«ÏG…&Ÿr{rw ~Dpd}D…Ÿrƒ&pXÂ7D¹~Dq+{/€dq€ Ê[p8t}!tK#~Dq}»tKwDq€ŠÀ˜ÕLÕĈ ÎT{…Ÿž¡ G± P[£4£¤YMON¦G§5 ¨9©¥R8£ŸÐџÒ,Ó:Æ Yž4Ǿ§R8  ¹ ¼|/tKƒq+w ª _ ª ´ ª _Gd¹ˆC½"yx{ UtK|/tK{rq+wDqZÃ!ˆ ÄÅyµÃ¹…K~!tKÊKt†œG#pryœ«…K~D…r¼AÑdt~D…&wDpryÁÑdq+Êy{rq&¼ftK{/€ ÀGyx~(…&wDpryÁÎTwDtÈ p8t}!tdˆ ª FGFGF ˆ5UtK|8tK{rq+w(q˜Ö]q+|q+{/€dq+{/#Ã<ѹ~}!Ÿr#~DŸd}DqaËz{/tKsxÈ Ãdw(ywZ»tKwDq€l…&{·ÕÂt¢D¹yœoŸ/œ¶Â {¹~}D…&|ÃLÕl…¹€dq+sw+ˆ«ÎT{ÁŸž4 G± P[£4£¤YMON¦G§… ¨Å©ZR8£ÅÃMON!©ZRˆ¯, YNL¨x£Sž4£[N)PS£Œ ¨…©ZR8£1Ò º ž¡  ¹ £¡KYN ¯R K ¹ ©*£SžÌ ¨©ZR8£Â/§¡§[ ¾PxMZKY©ZMQ YN ¨[ Yž¸¯, Y· ¹)º ©JKY©QMZ YN®KG»8¼7M¥N!± ¦ º MO§4©QMZP[§¸°¥Ò9¶¯<¼¶à áaá ³ ¼r|/tKƒ&q+w ª F ` ´ cK¯Gedˆ
2000
42
Extracting Causal Knowledge from a Medical Database Using Graphical Patterns

Christopher S.G. Khoo, Syin Chan and Yun Niu
Centre for Advanced Information Systems, School of Computer Engineering
Blk N4, Rm2A-32, Nanyang Avenue
Nanyang Technological University
Singapore 639798
[email protected]; [email protected]; [email protected]

Abstract

This paper reports the first part of a project that aims to develop a knowledge extraction and knowledge discovery system that extracts causal knowledge from textual databases. In this initial study, we develop a method to identify and extract cause-effect information that is explicitly expressed in medical abstracts in the Medline database. A set of graphical patterns were constructed that indicate the presence of a causal relation in sentences, and which part of the sentence represents the cause and which part represents the effect. The patterns are matched with the syntactic parse trees of sentences, and the parts of the parse tree that match with the slots in the patterns are extracted as the cause or the effect.

1 Introduction

Vast amounts of textual documents and databases are now accessible on the Internet and the World Wide Web. However, it is very difficult to retrieve useful information from this huge disorganized storehouse. Programs that can identify and extract useful information, and relate and integrate information from multiple sources are increasingly needed. The World Wide Web presents tremendous opportunities for developing knowledge extraction and knowledge discovery programs that automatically extract and acquire knowledge about a domain by integrating information from multiple sources. New knowledge can be discovered by relating disparate pieces of information and by inferencing from the extracted knowledge. This paper reports the first phase of a project to develop a knowledge extraction and knowledge discovery system that focuses on causal knowledge.
A system is being developed to identify and extract cause-effect information from the Medline database – a database of abstracts of medical journal articles and conference papers. In this initial study, we focus on cause-effect information that is explicitly expressed (i.e. indicated using some linguistic marker) in sentences. We have selected four medical areas for this study – heart disease, AIDS, depression and schizophrenia. The medical domain was selected for two reasons:

1. The causal relation is particularly important in medicine, which is concerned with developing treatments and drugs that can effect a cure for some disease.
2. Because of the importance of the causal relation in medicine, the relation is more likely to be explicitly indicated using linguistic means (i.e. using words such as result, effect, cause, etc.).

2 Previous Studies

The goal of information extraction research is to develop systems that can identify the passage(s) in a document that contains information that is relevant to a prescribed task, extract the information and relate the pieces of information by filling a structured template or a database record (Cardie, 1997; Cowie & Lehnert, 1996; Gaizauskas & Wilks, 1998). Information extraction research has been influenced tremendously by the series of Message Understanding Conferences (MUC-5, MUC-6, MUC-7), organized by the U.S. Advanced Research Projects Agency (ARPA) (http://www.muc.saic.com/proceedings/proceedings_index.html). Participants of the conferences develop systems to perform common information extraction tasks, defined by the conference organizers. For each task, a template is specified that indicates the slots to be filled in and the type of information to be extracted to fill each slot. The set of slots defines the various entities, aspects and roles relevant to a prescribed task or topic of interest.
Information that has been extracted can be used for populating a database of facts about entities or events, for automatic summarization, for information mining, and for acquiring knowledge to use in a knowledge-based system. Information extraction systems have been developed for a wide range of tasks. However, few of them have focused on extracting cause-effect information from texts.

Previous studies that have attempted to extract cause-effect information from text have mostly used knowledge-based inferences to infer the causal relations. Selfridge, Daniell & Simmons (1985) and Joskowicz, Ksiezyk & Grishman (1989) developed prototype computer programs that extracted causal knowledge from short explanatory messages entered into the knowledge acquisition component of an expert system. When there was ambiguity as to whether a causal relation was expressed in the text, the systems used a domain model to check whether such a causal relation between the events was possible. Kontos & Sidiropoulou (1991) and Kaplan & Berry-Rogghe (1991) used linguistic patterns to identify causal relations in scientific texts, but the grammar, lexicon, and patterns for identifying causal relations were hand-coded and developed just to handle the sample texts used in the studies. Knowledge-based inferences were also used. The authors pointed out that substantial domain knowledge was needed for the system to identify causal relations in the sample texts accurately.

More recently, Garcia (1997) developed a computer program to extract cause-effect information from French technical texts without using domain knowledge. He focused on causative verbs and reported a precision rate of 85%. Khoo, Kornfilt, Oddy & Myaeng (1998) developed an automatic method for extracting cause-effect information from Wall Street Journal texts using linguistic clues and pattern matching. Their system was able to extract about 68% of the causal relations with an error rate of about 36%.
The emphasis of the current study is on extracting cause-effect information that is explicitly expressed in the text without knowledge-based inferencing. It is hoped that this will result in a method that is more easily portable to other subject areas and document collections. We also make use of a parser (Conexor’s FDG parser) to construct syntactic parse trees for the sentences. Graphical extraction patterns are constructed to extract information from the parse trees. As a result, a much smaller number of patterns need be constructed. Khoo et al. (1998), who used only part-of-speech tagging and phrase bracketing, but not full parsing, had to construct a large number of extraction patterns.

3 Initial Analysis of the Medical Texts

200 abstracts were downloaded from the Medline database for use as our training sample of texts. They are from four medical areas: depression, schizophrenia, heart disease and AIDS (fifty abstracts from each area). The texts were analysed to identify:

1. the different roles and attributes that are involved in a causal situation. Cause and effect are, of course, the main roles, but other roles also exist including enabling conditions, size of the effect, and size of the cause (e.g. dosage).
2. the various linguistic markers used by the writers to explicitly signal the presence of a causal relation, e.g. as a result, affect, reduce, etc.

3.1 Cause-effect template

The various roles and attributes of causal situations identified in the medical abstracts are structured in the form of a template. There are three levels in our cause-effect template, Level 1 giving the high-level roles and Level 3 giving the most specific sub-roles. The first two levels are given in Table 1. A more detailed description is provided in Khoo, Chan & Niu (1999). The information extraction system developed in this initial study attempts to fill only the main slots of cause, effect and modality, without attempting to divide the main slots into subslots.

Table 1.
The cause-effect template

  Level 1                   Level 2
  Cause                     Object; State/Event; Size
  Effect                    Object; State/Event; Size; Polarity (e.g. “Increase”, “Decrease”, etc.)
  Condition                 Object; State/Event; Size; Duration
  Modality                  Degree of necessity; Modality (e.g. “True”, “False”, “Probable”, “Possible”, etc.)
  Evidence                  Research method; Sample size; Significance level; Information source; Location
  Type of causal relation

Table 2. Common causal expressions for depression & schizophrenia

  Expression            No. of Occurrences
  causative verb        69
  effect (of) …(on)     51
  associate with        35
  treatment of          31
  have effect on        28
  treat with            26
  treatment with        22
  effective (for)       14
  related to            10

Table 3. Common causal expressions for AIDS & heart disease

  Expression            No. of Occurrences
  causative verb        119
  have effect on        30
  effect (of)…(on)      25
  due to                20
  associate with        19
  treat with            15
  causative noun (including nominalized verbs)  12
  effective for         10

3.2 Causal expressions in medical texts

Causal relations are expressed in text in various ways. Two common ways are by using causal links and causative verbs. Causal links are words used to link clauses or phrases, indicating a causal relation between them. Altenburg (1984) provided a comprehensive typology of causal links. He classified them into four main types: the adverbial link (e.g. hence, therefore), the prepositional link (e.g. because of, on account of), subordination (e.g. because, as, since, for, so) and the clause-integrated link (e.g. that’s why, the result was). Causative verbs are transitive action verbs that express a causal relation between the subject and object or prepositional phrase of the verb. For example, the transitive verb break can be paraphrased as to cause to break, and the transitive verb kill can be paraphrased as to cause to die. We analyzed the 200 training abstracts to identify the linguistic markers (such as causal links and causative verbs) used to indicate causal relations explicitly.
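As a quick illustration of how such a marker inventory can be put to work, the sketch below flags sentences that contain one of the explicit causal expressions of the kind listed in Tables 2 and 3. It is only a toy first pass of our own, not the system described in this paper: the regular expressions stand in for proper stemming, and the expression subset and causative-verb list are assumptions.

```python
import re

# Illustrative subset of the frequent explicit causal expressions
# (Tables 2 and 3 report that the full inventory covers ~70% of the
# explicit causal expressions in the 200 training abstracts).
CAUSAL_LINKS = [
    r"\beffects? of\b", r"\bassociated? with\b",
    r"\btreatment (?:of|with)\b", r"\btreated? with\b",
    r"\bdue to\b", r"\beffective for\b", r"\brelated to\b",
]
# Causative verbs were the single most frequent device in both groups;
# this short list is a hypothetical stand-in for the real lexicon.
CAUSATIVE_VERBS = [r"\bcause[sd]?\b", r"\breduce[sd]?\b", r"\bresult(?:s|ed)? in\b"]

def has_explicit_causal_marker(sentence: str) -> bool:
    """Flag a sentence containing an explicit causal expression."""
    s = sentence.lower()
    return any(re.search(p, s) for p in CAUSAL_LINKS + CAUSATIVE_VERBS)
```

A detector like this only proposes candidate sentences; deciding which parts of the sentence fill the cause and effect slots requires the parse-tree pattern matching of Section 4.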
The most common linguistic expressions of cause-effect found in the Depression and Schizophrenia abstracts (occurring at least 10 times in 100 abstracts) are listed in Table 2. The common expressions found in the AIDS and Heart Disease abstracts (with at least 10 occurrences) are listed in Table 3. The expressions listed in the two tables cover about 70% of the explicit causal expressions found in the sample abstracts. Six expressions appear in both tables, indicating a substantial overlap in the two groups of medical areas. The most frequent way of expressing cause and effect is by using causative verbs.

4 Automatic Extraction of Cause-Effect Information

The information extraction process used in this study makes use of pattern matching. This is similar to methods employed by other researchers for information extraction. Whereas most studies focus on particular types of events or topics, we are focusing on a particular type of relation. Furthermore, the patterns used in this study are graphical patterns that are matched with syntactic parse trees of sentences. The patterns represent different words and sentence structures that indicate the presence of a causal relation and which parts of the sentence represent which roles in the causal situation. Any part of the sentence that matches a particular pattern is considered to describe a causal situation, and the words in the sentence that match slots in the pattern are extracted and used to fill the appropriate slots in the cause-effect template.

4.1 Parser

The sentences are parsed using Conexor’s Functional Dependency Grammar of English (FDG) parser (http://www.conexor.fi), which generates a representation of the syntactic structure of the sentence (i.e. the parse tree). For the example sentence

  Paclitaxel was well tolerated and resulted in a significant clinical response in this patient.

a graphical representation of the parser output is given in Fig. 1.
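To make the data flow concrete, such a parse tree can be modeled as nodes carrying a word label plus outgoing (syntactic-relation, child) arcs, which can then be serialized into a linear bracketed string of the kind used in the rest of this section. The sketch below is illustrative only: the class and method names are ours, not part of the FDG parser's API, and the serialization only approximates the paper's linear notation (continuation marks and trailing punctuation are omitted).

```python
# Illustrative model of a dependency parse tree: each node holds a word
# label and a list of outgoing (syntactic-relation, child) arcs.
class ParseNode:
    def __init__(self, label):
        self.label = label
        self.arcs = []               # list of (relation, ParseNode)

    def add(self, relation, child):
        self.arcs.append((relation, child))
        return child                 # return the child to allow chaining

    def linear(self):
        # Serialize as [label]->(rel)->[child]... for each outgoing arc
        return "[%s]" % self.label + "".join(
            "->(%s)->%s" % (rel, child.linear()) for rel, child in self.arcs)

# A fragment of the example sentence's structure:
# result --loc--> in --pcomp--> response
result = ParseNode("result")
result.add("loc", ParseNode("in")).add("pcomp", ParseNode("response"))
print(result.linear())   # [result]->(loc)->[in]->(pcomp)->[response]
```

Keeping the tree and its linear form interchangeable is convenient because, as described next, both the sentences and the causality patterns are manipulated in the same notation.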
For easier processing, the syntactic structure is converted to the linear conceptual graph formalism (Sowa, 1984) given in Fig. 2. A conceptual graph is a graph with the nodes representing concepts and the directed arcs representing relations between concepts. Although the conceptual graph formalism was developed primarily for semantic representation, we use it to represent the syntactic structure of sentences. In the linear conceptual graph notation, concept labels are given within square brackets and relations between concepts are given within parentheses. Arrows indicate the direction of the relations.

Fig. 1. Syntactic structure of a sentence

4.2 Construction of causality patterns

We developed a set of graphical patterns that specifies the various ways a causal relation can be explicitly expressed in a sentence. We call them causality patterns. The initial set of patterns was constructed based on the training set of 200 abstracts mentioned earlier. Each abstract was analysed by two of the authors to identify the sentences containing causal relations, and the parts of the sentences representing the cause and the effect. For each sentence containing a causal relation, the words (causality identifiers) that were used to signal the causal relation were also identified. These are mostly causal links and causative verbs described earlier.

Example sentence
  Paclitaxel was well tolerated and resulted in a significant clinical response in this patient.

Syntactic structure in linear conceptual graph format
  [tolerate]
    (v-ch)->[be]->(subj)->[paclitaxel]
    (man)->[well]
    (cc)->[and]
    (cc)->[result]
      (loc)->[in]->(pcomp)->[response]
        (det)->[a]
        (attr)->[clinical]->(attr)->[significant],
      (phr)->[in]->(pcomp)->[patient]->(det)->[this],,.

Example causality pattern
  [*]&(v-ch)->(subj)->[T:cause.object]
    (cc|cnd)->[result]+(loc)+->[in]+->(pcomp)->[T:effect.event]
    (phr)->[in]->(pcomp)->[T:effect.object],,.
Cause-effect template:
  Cause: paclitaxel
  Effect: a significant clinical response in this patient

[Fig. 2. Sentence structure and causality pattern in conceptual graph format]

We constructed the causality patterns for each causality identifier, to express the different sentence constructions that the causality identifier can be involved in, and to indicate which parts of the sentence represent the cause and the effect. For each causality identifier, at least 20 sentences containing the identifier were analysed. If the training sample abstracts did not have 20 sentences containing the identifier, additional sentences were downloaded from the Medline database. After the patterns were constructed, they were applied to a new set of 20 sentences from Medline containing the identifier. Measures of precision and recall were calculated. Each set of patterns is thus associated with a precision and a recall figure as a rough indication of how good the set of patterns is.

The causality patterns are represented in linear conceptual graph format with some extensions. The symbols used in the patterns are as follows:

1. Concept nodes take the following form: [concept_label] or [concept_label: role_indicator]. Concept_label can be:
   • a character string in lower case, representing a stemmed word
   • a character string in upper case, referring to a class of synonymous words that can occupy that place in a sentence
   • "*", a wildcard character that can match any word
   • "T", a wildcard character that can match any sub-tree.
Role_indicator refers to a slot in the cause-effect template, and can take the form:
   • role_label, which is the name of a slot in the cause-effect template
   • role_label = "value", where value is a character string that should be entered in the slot in the cause-effect template (if "value" is not specified, the part of the sentence that matches the concept_label is entered in the slot).

2. Relation nodes take the following form: (set_of_relations). Set_of_relations can be:
   • a relation_label, which is a character string representing a syntactic relation (these are the relation tags used by Conexor's FDG parser)
   • relation_label | set_of_relations ("|" indicates a logical "or")

3. &subpattern_label refers to a set of subgraphs.

Each node can also be followed by a "+", indicating that the node is mandatory. If the mandatory nodes are not found in the sentence, the pattern is rejected and no information is extracted from the sentence. All other nodes are optional. An example of a causality pattern is given in Fig. 2.

4.3 Pattern matching

The information extraction process involves matching the causality patterns with the parse trees of the sentences. The parse trees and the causality patterns are both represented in the linear conceptual graph notation. The pattern matching for each sentence proceeds as follows:

1. the causality identifiers that match keywords in the sentence are identified,
2. the causality patterns associated with each matching causality identifier are shortlisted,
3. for each shortlisted pattern, a matching process is carried out on the sentence.

The matching process involves a kind of spreading activation in both the causality pattern graph and the sentence graph, starting from the node representing the causality identifier. If a pattern node matches a sentence node, the matching nodes in the pattern and the sentence are activated. This activation spreads outwards, with the causality identifier node as the center.
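The outward matching idea can be illustrated with a much-simplified sketch. This is not the paper's implementation (which operates on conceptual graphs and was written in Java); here trees and patterns are nested dicts, and all field names are ours. Wildcards, role slots, and the mandatory "+" marker are modeled as described above:

```python
def match(pattern, tree, slots):
    """Match a pattern node against a parse-tree node, spreading outward.
    Nodes are dicts: {'word': ..., 'role': ..., 'mandatory': ...,
    'children': {relation: node}}. Fails only when a mandatory part of
    the pattern is absent from the sentence."""
    word = pattern.get('word')
    if word not in ('*', None) and word != tree['word']:
        return False
    if 'role' in pattern:                       # fill a template slot
        slots[pattern['role']] = tree['word']
    for rel, sub in pattern.get('children', {}).items():
        branch = tree.get('children', {}).get(rel)
        if branch is None:
            if sub.get('mandatory'):
                return False                    # mandatory node missing
            continue                            # optional branch: stop here
        if not match(sub, branch, slots) and sub.get('mandatory'):
            return False
    return True

# Toy sentence tree: "paclitaxel resulted in (a) response"
tree = {'word': 'result', 'children': {
    'subj': {'word': 'paclitaxel'},
    'loc': {'word': 'in', 'children': {
        'pcomp': {'word': 'response'}}}}}

# Pattern anchored at the causality identifier "result"
pattern = {'word': 'result', 'children': {
    'subj': {'word': '*', 'role': 'cause'},
    'loc': {'word': 'in', 'mandatory': True, 'children': {
        'pcomp': {'word': '*', 'role': 'effect', 'mandatory': True}}}}}

slots = {}
ok = match(pattern, tree, slots)
```

The matcher starts at the identifier node and activates matching branches outward, stopping on a branch as soon as it fails to match, as the surrounding text describes.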
When a pattern node does not match a sentence node, the spreading activation stops for that branch of the pattern graph. Procedures are attached to the nodes to check whether there is a match and to extract words to fill in the slots in the cause-effect template. The pattern matching program has been implemented in Java (JDK 1.2.1). An example of a sentence, matching pattern and filled template is given in Fig. 2.

5 Evaluation

A total of 68 patterns were constructed for the 35 causality identifiers that occurred at least twice in the training abstracts. The patterns were applied to two sets of new abstracts downloaded from Medline: 100 new abstracts from the original four medical areas (25 abstracts from each area), and 30 abstracts from two new domains (15 each): digestive system diseases and respiratory tract diseases. Each test abstract was analyzed by at least two of the authors to identify "medically relevant" cause and effect. A fair number of causal relations in the abstracts are trivial and not medically relevant, and it was felt that it would not be useful for the information extraction system to extract these trivial causal relations. Of the causal relations manually identified in the abstracts, about 7% are implicit (i.e. have to be inferred using knowledge-based inferencing) or occur across sentences. Since the focus of the study is on explicitly expressed cause and effect within a sentence, only these are included in the evaluation. The evaluation results are presented in Table 4. Recall is the percentage of the slots filled by the human analysts that are correctly filled by the computer program. Precision is the percentage of slots filled by the computer program that are correct (i.e. the text entered in the slot is the same as that entered by the human analysts). If the text entered by the computer program is partially correct, it is scored as 0.5 (i.e. half correct).
The F-measure given in Table 4 is a combination of recall and precision equally weighted, and is calculated using the formula (MUC-7):

  F = 2 * precision * recall / (precision + recall)

Table 4. Extraction results

  Slot                  Recall   Precision   F-measure
  Results for 100 abstracts from the original 4 medical areas
  Causality Identifier  .759     .768        .763
  Cause                 .462     .565        .508
  Effect                .549     .611        .578
  Modality              .410     .811        .545
  Results for 30 abstracts from 2 new medical areas
  Causality Identifier  .618     .759        .681
  Cause                 .415     .619        .497
  Effect                .441     .610        .512
  Modality              .542     .765        .634

For the 4 medical areas used for building the extraction patterns, the F-measures for the cause and effect slots are 0.508 and 0.578 respectively. If implicit causal relations are included in the evaluation, the recall measures for cause and effect are 0.405 and 0.481 respectively, yielding an F-measure of 0.47 for cause and 0.54 for effect. The results are not very good, but not very bad either for an information extraction task. For the 2 new medical areas, we can see in Table 4 that the precision is about the same as for the original 4 medical areas, indicating that the current extraction patterns work equally well in the new areas. The lower recall indicates that new causality identifiers and extraction patterns need to be constructed. The sources of errors were analyzed for the set of 100 test abstracts and are summarized in Table 5. Most of the spurious extractions (information extracted by the program as cause or effect but not identified by the human analysts) were actually causal relations that were not medically relevant. As mentioned earlier, the manual identification of causal relations focused on medically relevant causal relations. In the cases where the program did not correctly extract cause and effect information identified by the analysts, half were due to incorrect parser output, and in 20% of the cases causality patterns had not been constructed for the causality identifier found in the sentence.
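The MUC-7 F-measure formula above can be checked directly against the figures in Table 4. A one-line sketch:

```python
def f_measure(precision, recall):
    """Equally weighted combination of precision and recall (MUC-7)."""
    return 2 * precision * recall / (precision + recall)

# Reproduce the Causality Identifier row of Table 4 (original 4 areas)
f = f_measure(0.768, 0.759)
```

Rounded to three decimals this reproduces the .763 reported for the causality-identifier slot, and the same function reproduces the other rows.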
We also analyzed the instances of implicit causal relations in sentences, and found that many of them can be identified using some amount of semantic analysis. Some of them involve words like when, after and with that indicate a time sequence, for example:

• The results indicate that changes to 8-OH-DPAT and clonidine-induced responses occur quicker with the combination treatment than with either reboxetine or sertraline treatments alone.
• There are also no reports of serious adverse events when lithium is added to a monoamine oxidase inhibitor.
• Four days after flupenthixol administration, the patient developed orolingual dyskinetic movements involving mainly tongue biting and protrusion.

Table 5. Sources of Extraction Errors

A. Spurious errors (the program identified cause or effect not identified by the human judges)
  A1. The relations extracted are not relevant to medicine or disease. (84.1%)
  A2. Nominalized or adjectivized verbs are identified as causative verbs by the program because of parser error. (2.9%)
  A3. Some words and sentence constructions that are used to indicate cause-effect can be used to indicate other kinds of relations as well. (13.0%)

B. Missing slots (cause or effect not extracted by program), incorrect text extracted, and partially correct extraction
  B1. Complex sentence structures that are not included in the pattern. (18.8%)
  B2. The parser gave the wrong syntactic structure of a sentence. (49.2%)
  B3. Unexpected sentence structure resulting in the program extracting information that is actually not a cause or effect. (1.5%)
  B4. Patterns for the causality identifier have not been constructed. (19.6%)
  B5. Sub-tree error. The program extracts the relevant sub-tree (of the parse tree) to fill in the cause or effect slot. However, because of the sentence construction, the sub-tree includes both the cause and effect, resulting in too much text being extracted. (9.5%)
  B6.
Errors caused by pronouns that refer to a phrase or clause within the same sentence. (1.3%)

In these cases, a treatment or drug is associated with a treatment response or physiological event. If noun phrases and clauses in sentences can be classified accurately into treatments and treatment responses (perhaps by using Medline's Medical Subject Headings), then such implicit causal relations can be identified automatically. Another group of words involved in implicit causal relations are words like receive, get and take, which indicate that the patient received a drug or treatment, for example:

• The nine subjects who received p24-VLP and zidovudine had an augmentation and/or broadening of their CTL response compared with baseline (p = 0.004).

Such causal relations can also be identified by semantic analysis and by classifying noun phrases and clauses into treatments and treatment responses.

6 Conclusion

We have described a method for performing automatic extraction of cause-effect information from textual documents. We use Conexor's FDG parser to construct a syntactic parse tree for each target sentence. The parse tree is matched with a set of graphical causality patterns that indicate the presence of a causal relation. When a match is found, various attributes of the causal relation (e.g. the cause, the effect, and the modality) can be extracted and entered in a cause-effect template. The accuracy of our extraction system is not yet satisfactory, with an accuracy of about 0.51 (F-measure) for extracting the cause and 0.58 for extracting the effect when these are explicitly expressed. If both implicit and explicit causal relations are included, the accuracy is 0.41 for cause and 0.48 for effect. We were heartened to find that when the extraction patterns were applied to 2 new medical areas, the extraction precision was the same as for the original 4 medical areas. Future work includes:

1. Constructing patterns to identify causal relations across sentences
2.
Expanding the study to more medical areas
3. Incorporating semantic analysis to extract implicit cause-effect information
4. Incorporating discourse processing, including anaphor and co-reference resolution
5. Developing a method for constructing extraction patterns automatically
6. Investigating whether the cause-effect information extracted can be chained together to synthesize new knowledge.

Two aspects of discourse processing are being studied: co-reference resolution and hypothesis confirmation. Co-reference resolution is important for two reasons. The first is the obvious reason that to extract complete cause-effect information, pronouns and references have to be resolved and replaced with the information that they refer to. The second reason is that quite often a causal relation between two events is expressed more than once in a medical abstract, each time providing new information about the causal situation. The extraction system thus needs to be able to recognize that the different causal expressions refer to the same causal situation, and merge the information extracted from the different sentences. The second aspect of discourse processing being investigated is what we refer to as hypothesis confirmation. Sometimes, a causal relation is hypothesized by the author at the beginning of the abstract. This hypothesis may be confirmed or disconfirmed by another sentence later in the abstract. The information extraction system thus has to be able to link the initial hypothetical cause-effect expression with the confirmation or disconfirmation expression later in the abstract. Finally, we hope eventually to develop a system that not only extracts cause-effect information from medical abstracts accurately, but also synthesizes new knowledge by chaining the extracted causal relations. In a series of studies, Swanson (1986) has demonstrated that logical connections between the published literature of two medical research areas can provide new and useful hypotheses.
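Swanson-style chaining over extracted relations (if A causes B and B causes C, hypothesize that A causes C) can be sketched in a few lines. The function name and the toy relation strings are ours; the pairs paraphrase Swanson's (1986) fish-oil/Raynaud illustration:

```python
def chain(relations):
    """Given explicit (cause, effect) pairs, return hypothesized
    indirect cause-effect links obtained by one chaining step."""
    effects = {}
    for cause, effect in relations:
        effects.setdefault(cause, set()).add(effect)
    hypotheses = set()
    for a, bs in effects.items():
        for b in bs:
            for c in effects.get(b, ()):  # A -> B and B -> C
                if c not in bs:           # keep only non-explicit links
                    hypotheses.add((a, c))
    return hypotheses

found = chain([("fish oil", "blood viscosity reduction"),
               ("blood viscosity reduction", "Raynaud symptom relief")])
```

A real system would iterate this step to a fixed point and would have to filter the hypotheses for plausibility, but the core inference is just this join over the extracted cause-effect pairs.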
Suppose an article reports that A causes B, and another article reports that B causes C; then there is an implicit logical link between A and C (i.e. A causes C). This relation would not become explicit unless work is done to extract it. Thus, new discoveries can be made by analysing published literature automatically (Finn, 1998; Swanson & Smalheiser, 1997).

References

Altenberg, B. (1984). Causal linking in spoken and written English. Studia Linguistica, 38(1), 20-69.
Cardie, C. (1997). Empirical methods in information extraction. AI Magazine, 18(4), 65-79.
Cowie, J., & Lehnert, W. (1996). Information extraction. Communications of the ACM, 39(1), 80-91.
Finn, R. (1998). Program uncovers hidden connections in the literature. The Scientist, 12(10), 12-13.
Gaizauskas, R., & Wilks, Y. (1998). Information extraction beyond document retrieval. Journal of Documentation, 54(1), 70-105.
Garcia, D. (1997). COATIS, an NLP system to locate expressions of actions connected by causality links. In Knowledge Acquisition, Modeling and Management, 10th European Workshop, EKAW '97 Proceedings (pp. 347-352). Berlin: Springer-Verlag.
Joskowsicz, L., Ksiezyk, T., & Grishman, R. (1989). Deep domain models for discourse analysis. In The Annual AI Systems in Government Conference (pp. 195-200). Silver Spring, MD: IEEE Computer Society.
Kaplan, R. M., & Berry-Rogghe, G. (1991). Knowledge-based acquisition of causal relationships in text. Knowledge Acquisition, 3(3), 317-337.
Khoo, C., Chan, S., Niu, Y., & Ang, A. (1999). A method for extracting causal knowledge from textual databases. Singapore Journal of Library & Information Management, 28, 48-63.
Khoo, C. S. G., Kornfilt, J., Oddy, R. N., & Myaeng, S. H. (1998). Automatic extraction of cause-effect information from newspaper text without knowledge-based inferencing. Literary and Linguistic Computing, 13(4), 177-186.
Kontos, J., & Sidiropoulou, M. (1991). On the acquisition of causal knowledge from scientific texts with attribute grammars.
Expert Systems for Information Management, 4(1), 31-48.
MUC-5. (1993). Fifth Message Understanding Conference (MUC-5). San Francisco: Morgan Kaufmann.
MUC-6. (1995). Sixth Message Understanding Conference (MUC-6). San Francisco: Morgan Kaufmann.
MUC-7. (2000). Message Understanding Conference proceedings (MUC-7) [Online]. Available: http://www.muc.saic.com/proceedings/muc_7_toc.html.
Selfridge, M., Daniell, J., & Simmons, D. (1985). Learning causal models by understanding real-world natural language explanations. In The Second Conference on Artificial Intelligence Applications: The Engineering of Knowledge-Based Systems (pp. 378-383). Silver Spring, MD: IEEE Computer Society.
Sowa, J. F. (1984). Conceptual structures: Information processing in man and machine. Reading, MA: Addison-Wesley.
Swanson, D. R. (1986). Fish oil, Raynaud's Syndrome, and undiscovered public knowledge. Perspectives in Biology and Medicine, 30(1), 7-18.
Swanson, D. R., & Smalheiser, N. R. (1997). An interactive system for finding complementary literatures: A stimulus to scientific discovery. Artificial Intelligence, 91, 183-203.
2000
Memory-Efficient and Thread-Safe Quasi-Destructive Graph Unification

Marcel P. van Lohuizen
Department of Information Technology and Systems
Delft University of Technology
[email protected]

Abstract

In terms of both speed and memory consumption, graph unification remains the most expensive component of unification-based grammar parsing. We present a technique to reduce the memory usage of unification algorithms considerably, without increasing execution times. Also, the proposed algorithm is thread-safe, providing an efficient algorithm for parallel processing as well.

1 Introduction

Both in terms of speed and memory consumption, graph unification remains the most expensive component in unification-based grammar parsing. Unification is a well known algorithm. Prolog, for example, makes extensive use of term unification. Graph unification is slightly different. Two different graph notations and an example unification are shown in Figure 1 and 2, respectively. In typical unification-based grammar parsers, roughly 90% of the unifications fail. Any processing to create, or copy, the result graph before the point of failure is redundant. As copying is the most expensive part of unification, a great deal of research has gone into eliminating superfluous copying. Examples of these approaches are given in (Tomabechi, 1991) and (Wroblewski, 1987). In order to avoid superfluous copying, these algorithms incorporate control data in the graphs. This has several drawbacks, as we will discuss next.

[Figure 1: Two ways to represent an identical graph — as a directed graph and as the attribute-value matrix [A = b, C = [1][D = e], F = [1]].]

Memory Consumption. To achieve the goal of eliminating superfluous copying, the aforementioned algorithms include administrative fields—which we will call scratch fields—in the node structure. These fields do not contribute to the definition of the graph, but are used to efficiently guide the unification and copying process.
Before a graph is used in unification, or after a result graph has been copied, these fields just take up space. This is undesirable, because memory usage is of great concern in many unification-based grammar parsers. This problem is especially of concern in Tomabechi's algorithm, as it increases the node size by at least 60% for typical implementations. In the ideal case, scratch fields would be stored in a separate buffer, allowing them to be reused for each unification. The size of such a buffer would be proportional to the maximum number of nodes that are involved in a single unification. Although this technique reduces memory usage considerably, it does not reduce the amount of data involved in a single unification. Nevertheless, storing and loading nodes without scratch fields will be faster, because they are smaller. Because scratch fields are reused, there is a high probability that they will remain in cache. As the difference in speed between processor and memory continues to grow, caching is an important consideration (Ghosh et al., 1997).¹

[Figure 2: An example unification in attribute-value matrix notation: [A = [B = c], D = [E = f]] ⊔ [A = [1][B = c], D = [1], G = [H = j]] ⇒ [A = [1][B = c, E = f], D = [1], G = [H = j]].]

A straightforward approach to separate the scratch fields from the nodes would be to use a hash table to associate scratch structures with the addresses of nodes. The overhead of a hash table, however, may be significant. In general, any binding mechanism is bound to require some extra work. Nevertheless, considering the difference in speed between processors and memory, reducing the memory footprint may compensate for the loss of performance to some extent.

Symmetric Multi Processing. Small-scale desktop multiprocessor systems (e.g. dual or even quad Pentium machines) are becoming more commonplace and affordable. If we focus on graph unification, there are two ways to exploit their capabilities.
First, it is possible to parallelize a single graph unification, as proposed by e.g. (Tomabechi, 1991). Suppose we are unifying graph a with graph b; then we could allow multiple processors to work on the unification of a and b simultaneously. We will call this parallel unification. Another approach is to allow multiple graph unifications to run concurrently. Suppose we are unifying graphs a and b in addition to unifying graphs a and c. By assigning a different processor to each operation we obtain what we will call concurrent unification. Parallel unification exploits parallelism inherent in graph unification itself, whereas concurrent unification exploits parallelism at the context-free grammar backbone. As long as the number of unification operations in one parse is large, we believe it is preferable to choose concurrent unification. Especially when a large number of unifications terminates quickly (e.g. due to failure), the overhead of more finely grained parallelism can be considerable.

[Footnote 1: Most of today's computers load and store data in large chunks (called cache lines), causing even uninitialized fields to be transported.]

In the example of concurrent unification, graph a was used in both unifications. This suggests that in order for concurrent unification to work, the input graphs need to be read-only. With destructive unification algorithms this does not pose a problem, as the source graphs are copied before unification. However, including scratch fields in the node structure (as Tomabechi's and Wroblewski's algorithms do) thwarts the implementation of concurrent unification, as different processors will need to write different values in these fields. One way to solve this problem is to disallow a single graph to be used in multiple unification operations simultaneously. In (van Lohuizen, 2000) it is shown, however, that this will greatly impair the ability to achieve speedup. Another solution is to duplicate the scratch fields in the nodes for each processor.
This, however, will enlarge the node size even further. In other words, Tomabechi's and Wroblewski's algorithms are not suited for concurrent unification.

2 Algorithm

The key to the solution of all of the above-mentioned issues is to separate the scratch fields from the fields that actually make up the definition of the graph. The resulting data structures are shown in Figure 3.

[Figure 3: Node and Arc structures and the reusable scratch fields. The permanent, read-only structures (Node: type, arc list, index; Arc: label, offset, value) use offsets; the reusable scratch structures (unification data: forward, comp-arc list; copy data: copy) use index values (including arcs recorded in comp-arc list). Our implementation derives offsets from index values stored in nodes.]

We have taken Tomabechi's quasi-destructive graph unification algorithm as the starting point (Tomabechi, 1995), because it is often considered to be the fastest unification algorithm for unification-based grammar parsing (see e.g. (op den Akker et al., 1995)). We have separated the scratch fields needed for unification from the scratch fields needed for copying.²

[Footnote 2: The arc-list field could be used for permanent forward links, if required.]

We propose the following technique to associate scratch structures with nodes. We take an array of scratch structures. In addition, for each graph we assign each node a unique index number that corresponds to an element in the array. Different graphs typically share the same indexes. Since unification involves two graphs, we need to ensure that two nodes will not be assigned the same scratch structure. We solve this by interleaving the index positions of the two graphs. This mapping is shown in Figure 4. Obviously, the minimum number of elements in the table is two times the number of nodes of the largest graph. To reduce the table size, we allow certain nodes to be deprived of scratch structures. (For example, we do not forward atoms.) We denote this with a valuation function v, which returns 1 if the node is assigned an index and 0 otherwise. We can associate the index with a node by including it in the node structure. For structure sharing, however, we have to use offsets between nodes (see Figure 4), because otherwise different nodes in a graph may end up having the same index (see Section 3).

[Figure 4: The mechanism to associate index numbers with nodes. The numbers in the nodes represent the index number. Arcs are associated with offsets. Negative offsets indicate a reentrancy.]

Offsets can be easily derived from index values in nodes. As storing offsets in arcs consumes more memory than storing indexes in nodes (more arcs may point to the same node), we store index values and use them to compute the offsets. For ease of reading, we present our algorithm as if the offsets were stored instead of computed. Note that the small index values consume much less space than the scratch fields they replace. The resulting algorithm is shown in Figure 5. It is very similar to the algorithm in (Tomabechi, 1991), but incorporates our indexing technique. Each reference to a node now not only consists of the address of the node structure, but also of its index in the table. This is required because we cannot derive its table index from its node structure alone. The second argument of Copy indicates the next free index number. Copy returns references with an offset, allowing them to be directly stored in arcs. These offsets will be negative when Copy exits at line 2.2, resembling a reentrancy. Note that only AbsArc explicitly defines operations on offsets. AbsArc computes a node's index using its parent node's index and an offset.
Unify(dg1, dg2)
1. try Unify1((dg1, 0), (dg2, 1))ᵃ
   1.1. (copy, n) ← Copy((dg1, 0), 0)
   1.2. Clear the fwtab and cptab table.ᵇ
   1.3. return copy
2. catch
   2.1. Clear the fwtab table.ᵇ
   2.2. return nil

Unify1(ref in1, ref in2)
1. ref1 ← (dg1, idx1) ← Dereference(ref in1)
2. ref2 ← (dg2, idx2) ← Dereference(ref in2)
3. if dg1 ≡addr dg2 and idx1 = idx2ᶜ then
   3.1. return
4. if dg1.type = bottom then
   4.1. Forward(ref1, ref2)
5. elseif dg2.type = bottom then
   5.1. Forward(ref2, ref1)
6. elseif both dg1 and dg2 are atomic then
   6.1. if dg1.arcs ≠ dg2.arcs then throw UnificationFailedException
   6.2. Forward(ref2, ref1)
7. elseif either dg1 or dg2 is atomic then
   7.1. throw UnificationFailedException
8. else
   8.1. Forward(ref2, ref1)
   8.2. shared ← IntersectArcs(ref1, ref2)
   8.3. for each (( , r1), ( , r2)) in shared do Unify1(r1, r2)
   8.4. new ← ComplementArcs(ref1, ref2)
   8.5. for each arc in new do Push arc to fwtab[idx1].comp-arcs

Forward((dg1, idx1), (dg2, idx2))
1. if v(dg1) = 1 then fwtab[idx1].forward ← (dg2, idx2)

AbsArc((label, (dg, off)), current idx)
   return (label, (dg, current idx + 2 · off))ᵈ

Dereference((dg, idx))
1. if v(dg) = 1 then
   1.1. (fwd-dg, fwd-idx) ← fwtab[idx].forward
   1.2. if fwd-dg ≠ nil then Dereference(fwd-dg, fwd-idx)
   1.3. else return (dg, idx)

IntersectArcs(ref1, ref2)
   Returns pairs of arcs with index values for each pair of arcs in ref1 resp. ref2 that have the same label. To obtain index values, arcs from arc-list must be converted with AbsArc.

ComplementArcs(ref1, ref2)
   Returns node references for all arcs with labels that exist in ref2, but not in ref1. The references are computed as with IntersectArcs.

Copy(ref in, new idx)
1. (dg, idx) ← Dereference(ref in)
2. if v(dg) = 1 and cptab[idx].copy ≠ nil then
   2.1. (dg1, idx1) ← cptab[idx].copy
   2.2. return (dg1, idx1 − new idx + 1)
3. newcopy ← new Node
4. newcopy.type ← dg.type
5. if v(dg) = 1 then cptab[idx].copy ← (newcopy, new idx)
6. count ← v(newcopy)ᵉ
7. if dg.type = atomic then
   7.1. newcopy.arcs ← dg.arcs
8. elseif dg.type = complex then
   8.1. arcs ← {AbsArc(a, idx) | a ∈ dg.arcs} ∪ fwtab[idx].comp-arcs
   8.2. for each (label, ref) in arcs do
        ref1 ← Copy(ref, count + new idx)ᶠ
        Push (label, ref1) into newcopy.arcs
        if ref1.offset > 0ᵍ then count ← count + ref1.offset
9. return (newcopy, count)

ᵃ We assign even and odd indexes to the nodes of dg1 and dg2, respectively.
ᵇ Tables only need to be cleared up to the point where unification failed.
ᶜ Compare indexes to allow more powerful structure sharing. Note that indexes uniquely identify a node in the case that for all nodes n holds v(n) = 1.
ᵈ Note that we are multiplying the offset by 2 to account for the interleaved offsets of the left and right graph.
ᵉ We assume it is known at this point whether the new node requires an index number.
ᶠ Note that ref contains an index, whereas ref1 contains an offset.
ᵍ If the node was already copied (in which case it is < 0), we need not reserve indexes.

Figure 5: The memory-efficient and thread-safe unification algorithm. Note that the arrays fwtab and cptab—which represent the forward table and copy table, respectively—are defined as global variables. In order to be thread-safe, each thread needs to have its own copy of these tables.

Contrary to Tomabechi's implementation, we invalidate scratch fields by simply resetting them after a unification completes. This simplifies the algorithm. We only reset the table up to the highest index in use. As table entries are roughly filled in increasing order, there is little overhead for clearing unused elements. A nice property of the algorithm is that indexes identify from which input graph a node originates (even = left, odd = right). This information can be used, for example, to selectively share nodes in a structure sharing scheme. We can also specify additional scratch fields or additional arrays at hardly any cost. Some of these abilities will be used in the enhancements of the algorithm we will discuss next.
3 Enhancements

Structure Sharing. Structure sharing is an important technique to reduce memory usage. We will adopt the same terminology as Tomabechi in (Tomabechi, 1992). That is, we will use the term feature-structure sharing when two arcs in one graph converge to the same node in that graph (also referred to as reentrancy) and data-structure sharing when arcs from two different graphs converge to the same node. The conditions for sharing mentioned in (Tomabechi, 1992) are: (1) bottom and atomic nodes can be shared; (2) complex nodes can be shared unless they are modified. We need to add the following condition: (3) all arcs in the shared subgraph must have the same offsets as the subgraph that would have resulted from copying. A possible violation of this constraint is shown in Figure 6. As long as arcs are processed in increasing order of index number,³ this condition can only be violated in case of reentrancy. Basically, the condition can be violated when a reentrancy points past a node that is bound to a larger subgraph.

[Footnote 3: This can easily be accomplished by fixing the order in which arcs are stored in memory. This is a good idea anyway, as it can speed up the ComplementArcs and IntersectArcs operations.]

[Figure 6: Sharing mechanism. Node f cannot be shared, as this would cause the arc labeled F to derive an index colliding with node q.]

Contrary to many other structure sharing schemes (like (Malouf et al., 2000)), our algorithm allows sharing of nodes that are part of the grammar. As nodes from the different input graphs are never assigned the same table entry, they are always bound independently of each other.
(See the footnote for line 3 of Unify1.) The sharing version of Copy is similar to the variant in (Tomabechi, 1992). The extra check can be implemented straightforwardly by comparing the old offset with the offset for the new nodes. Because we derive the offsets from index values associated with nodes, we need to compensate for a difference between the index of the shared node and the index it should have in the new graph. We store this information in a specialized share arc. We need to adjust Unify1 to handle share arcs accordingly.

Deferred Copying

Just as we use a table for unification and copying, we also use a table for subsumption checking. Tomabechi's algorithm requires that the graph resulting from unification be copied before it can be used for further processing. This can result in superfluous copying when the graph is subsumed by an existing graph. Our technique allows subsumption to use the bindings generated by Unify1 in addition to its own table. This allows us to defer copying until we have completed the subsumption check.

Figure 7: Execution time (seconds) versus sentence length (no. of words) for the configurations "basic", "tomabechi", "packed", "pack+deferred_copy", "pack+share", and "packed_on_dual_proc".

Packed Nodes

With a straightforward implementation of our algorithm, we obtain a node size of 8 bytes.[4] By dropping the concept of a fixed node size, we can reduce the size of atom and bottom nodes to 4 bytes. Type information can be stored in two bits. We use the two least significant bits of pointers (which are otherwise 0) to store this type information. Instead of using a pointer for the value field, we store nodes in place. Only for reentrancies do we still need pointers. Complex nodes require 8 bytes, as they include a pointer to the first node past their children (necessary for unification). This scheme requires some extra logic to decode nodes, but significantly reduces memory consumption.

[4] We do not have a type hierarchy.
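The pointer-tagging trick behind the packed nodes can be sketched as follows. The tag values and function names are assumptions for illustration; the text only states that two bits in the otherwise-zero low end of an aligned pointer suffice for the type information.

```c
#include <stdint.h>

enum { TAG_BOTTOM = 0, TAG_ATOM = 1, TAG_COMPLEX = 2, TAG_MASK = 3 };

/* Requires at least 4-byte alignment so the two low bits are free. */
static uintptr_t pack_node(const void *p, unsigned tag)
{
    return (uintptr_t)p | (uintptr_t)(tag & TAG_MASK);
}

/* Decode the two-bit type tag from a packed word. */
static unsigned node_tag(uintptr_t w)
{
    return (unsigned)(w & TAG_MASK);
}

/* Recover the original pointer by masking the tag bits off. */
static const void *node_ptr(uintptr_t w)
{
    return (const void *)(w & ~(uintptr_t)TAG_MASK);
}
```

The extra masking on every access is the "extra logic to decode nodes" the text mentions; it trades a few cycles for a smaller, more cache-friendly heap.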
Figure 8: Memory used by graph heap (MB) versus sentence length (no. of words) for the configurations "basic", "tomabechi", "packed", and "pack+share".

4 Experiments

We have tested our algorithm with a medium-sized grammar for Dutch. The system was implemented in Objective-C using a fixed-arity graph representation. We used a test set of 22 sentences of varying length. Usually, approximately 90% of the unifications fail. On average, graphs consist of 60 nodes. The experiments were run on a Pentium III 600EB (256 KB L2 cache) box, with 128 MB memory, running Linux.

We tested both memory usage and execution time for various configurations. The results are shown in Figures 7 and 8. They include a version of Tomabechi's algorithm; the node size for this implementation is 20 bytes. For the proposed algorithm we have included several versions: a basic implementation, a packed version, a version with deferred copying, and a version with structure sharing. The basic implementation has a node size of 8 bytes; the others have a variable node size. Whenever applicable, we applied the same optimizations to all algorithms.

We also tested the speedup on a dual Pentium II 266 MHz machine.[5] Each processor was assigned its own scratch tables. Apart from that, no changes to the algorithm were required. For more details on the multi-processor implementation, see (van Lohuizen, 1999).

[5] These results are scaled to reflect the speedup relative to the tests run on the other machine.

The memory utilization results show significant improvements for our approach.[6] Packing decreased memory utilization by almost 40%. Structure sharing roughly halved this once more.[7] The third condition prohibited sharing in less than 2% of the cases where it would have been possible in Tomabechi's approach. Figure 7 shows that our algorithm does not increase execution times. Our algorithm even scrapes off roughly 7% of the total parsing time.
This speedup can be attributed to improved cache utilization. We verified this by running the same tests with the cache disabled; this made our algorithm actually run slower than Tomabechi's algorithm. Deferred copying did not improve performance: the additional overhead of dereferencing during subsumption was not compensated by the savings on copying. Structure sharing did not significantly alter performance either. Although this version uses less memory, it has to perform additional work. Running the same tests on machines with less memory showed a clear performance advantage for the algorithms using less memory, because paging could be avoided.

[6] The results do not include the space consumed by the scratch tables. However, these tables do not consume more than 10 KB in total, and hence have no significant impact on the results.

[7] Because the packed version has a variable node size, structure sharing yielded less relative improvement than when applied to the basic version. In terms of the number of nodes, though, the two results were identical.

5 Related Work

We reduce the memory consumption of graph unification as presented in (Tomabechi, 1991) (or (Wroblewski, 1987)) by separating scratch fields from node structures. Pereira's (1985) algorithm also stores changes to nodes separately from the graph. However, Pereira's mechanism incurs a log(n) overhead for accessing the changes (where n is the number of nodes in a graph), resulting in an O(n log n) time algorithm. Our algorithm runs in O(n) time.

With respect to over and early copying (as defined in (Tomabechi, 1991)), our algorithm has the same characteristics as Tomabechi's algorithm. In addition, our algorithm allows the copying of graphs to be postponed until after subsumption checks complete. This would require additional fields in the node structure for Tomabechi's algorithm. Our algorithm allows sharing of grammar nodes, which is usually impossible in other implementations (Malouf et al., 2000).
A weak point of our structure sharing scheme is its extra condition. However, our experiments showed that this condition has only a minor impact on the amount of sharing. We showed that compressing node structures allowed us to reduce memory consumption by another 40% without sacrificing performance. Applying the same technique to Tomabechi's algorithm would yield smaller relative improvements (at most 20%), because the scratch fields cannot be compressed to the same extent.

One of the design goals of Tomabechi's algorithm was to arrive at an efficient implementation of parallel unification (Tomabechi, 1991). Although parallel unification is theoretically hard (Vitter and Simons, 1986), Tomabechi's algorithm provides an elegant solution for achieving limited-scale parallelism (Fujioka et al., 1990). Since our algorithm is based on the same principles, it allows parallel unification as well. Tomabechi's algorithm, however, is not thread-safe, and hence cannot be used for concurrent unification.

6 Conclusions

We have presented a technique to reduce memory usage by separating scratch fields from nodes. We showed that compressing node structures can further reduce the memory footprint. Although these techniques require extra computation, the algorithms still run faster. The main reason for this is the difference between cache and memory speed. As current developments indicate that this difference will only get larger, the effect is not just an artifact of current architectures.

We showed how to incorporate data-structure sharing. For our grammar, the additional constraint for sharing did not pose a problem. If it does pose a problem, there are several techniques to mitigate its effect. For example, one could reserve additional indexes at critical positions in a subgraph (e.g., based on type information). These can then be assigned to nodes in later unifications without introducing conflicts elsewhere.
Another technique is to include a tiny table with repair information in each share arc, to allow a small number of conflicts to be resolved. For certain grammars, data-structure sharing can also significantly reduce execution times, because the equality check (see line 3 of Unify1) can intercept shared nodes with the same address more frequently. We did not exploit this benefit, but rather included an offset check to allow grammar nodes to be shared as well. One could still choose, however, not to share grammar nodes.

Finally, we introduced deferred copying. Although this technique did not improve performance, we suspect that it might be beneficial for systems that use more expensive memory allocation and deallocation models (like garbage collection).

Since memory consumption is a major concern with many of the current unification-based grammar parsers, our approach provides a fast and memory-efficient alternative to Tomabechi's algorithm. In addition, we showed that our algorithm is well suited for concurrent unification, which can reduce execution times as well.

References

[Fujioka et al. 1990] T. Fujioka, H. Tomabechi, O. Furuse, and H. Iida. 1990. Parallelization technique for quasi-destructive graph unification algorithm. In Information Processing Society of Japan SIG Notes 90-NL-80.

[Ghosh et al. 1997] S. Ghosh, M. Martonosi, and S. Malik. 1997. Cache miss equations: An analytical representation of cache misses. In Proceedings of the 11th International Conference on Supercomputing (ICS-97), pages 317–324, New York, July 7–11. ACM Press.

[Malouf et al. 2000] Robert Malouf, John Carroll, and Ann Copestake. 2000. Efficient feature structure operations without compilation. Natural Language Engineering, 1(1):1–18.

[op den Akker et al. 1995] R. op den Akker, H. ter Doest, M. Moll, and A. Nijholt. 1995. Parsing in dialogue systems using typed feature structures. Technical Report 95-25, Dept.
of Computer Science, University of Twente, Enschede, The Netherlands, September. Extended version of an article published in E...

[Pereira 1985] Fernando C. N. Pereira. 1985. A structure-sharing representation for unification-based grammar formalisms. In Proc. of the 23rd Annual Meeting of the Association for Computational Linguistics, Chicago, IL, 8–12 Jul 1985, pages 137–144.

[Tomabechi 1991] H. Tomabechi. 1991. Quasi-destructive graph unifications. In Proceedings of the 29th Annual Meeting of the ACL, Berkeley, CA.

[Tomabechi 1992] Hideto Tomabechi. 1992. Quasi-destructive graph unifications with structure-sharing. In Proceedings of the 15th International Conference on Computational Linguistics (COLING-92), Nantes, France.

[Tomabechi 1995] Hideto Tomabechi. 1995. Design of efficient unification for natural language. Journal of Natural Language Processing, 2(2):23–58.

[van Lohuizen 1999] Marcel van Lohuizen. 1999. Parallel processing of natural language parsers. In PARCO '99. Paper accepted (8 pages), to appear soon.

[van Lohuizen 2000] Marcel P. van Lohuizen. 2000. Exploiting parallelism in unification-based parsing. In Proc. of the Sixth International Workshop on Parsing Technologies (IWPT 2000), Trento, Italy.

[Vitter and Simons 1986] Jeffrey Scott Vitter and Roger A. Simons. 1986. New classes for parallel complexity: A study of unification and other complete problems for P. IEEE Transactions on Computers, C-35(5):403–418, May.

[Wroblewski 1987] David A. Wroblewski. 1987. Nondestructive graph unification. In Kenneth Forbus and Howard Shrobe, editors, Proceedings of the 6th National Conference on Artificial Intelligence (AAAI-87), pages 582–589, Seattle, WA, July. Morgan Kaufmann.
H C;N 7ED a M BDNPL-d % I -/.0/1 (KJ 2 H C;N < % I Q (KJNM 35464 L a/ d % I -/.0/1 (KJ 2 35464 – P¿Q+R —#< …N “ a/ d % I -/.0/1 (KJ 2 …N-– PQKR —#< B H N zBD46‚‡3 H zBDC BDC;4 3 r ‚‡4 z{O‹…4 7 9 r t N n O;…4j…3 n;t 354B NP™ BDC;46zM‡™›4 r B n MD4 3'B5M n yB n MD463 C r 3 BDN t 4ŽMD46x r M€…4j…}w ‘ C4 MD463'B5MDz{yBD4j… 3'B5M n yB n MD4š‚ n 3'B t 4 r B‡|{4 r 3'BŽBDC4š35zh64 NH™ BDC4 3'B5M n y#B n MD4Š‚ rH7 z{‚ r |{| : …46ONPBD4j… tJ:r yNhO;3'B5M r z{OsB l n 35z{O;x rPt 3'B5M r yBDz{NPON / 4M r |{|Jy6NPO;3'B5M r zWOJBD3 r 3?MD463'B5MDz{yBDz{NPO H z{|{|Jx n;r M5† r OJBD464 BDCz{36w e ] “D a M ¼  BDNPL-dz{3 n 354j…—™›NPM ™ n OyBDz{NhO r |‹L/MDN 1 46yBDz{NhO3 %gr |{| NP™ H C;z{y€C H z{|{| t 4 / 4M t‹r |}zWO BDC;z{3 4 7r ‚‡L;|{4 (#l  D a M ¼  BDNPL-d z{3 n O‹…/4MD35L46y6zŒ‹46… ™8NHMBDC;4 |{4 7 z{y r |í¼)™ n OyBDz{NhO r |…z{3'BDz{O;y#BDzWNPO?w ڌ§Q¯f±8Ÿk 'ªs©«§Ÿk§_¤ÉŸ«±)³À©«¡®¡¢©˜è±)± %'&‘P( –10 . 3254  . Ñ  ›7 ›7 ” / z  276 l‹a Q98 d l xƒw;: 'Ær: l‹t NJN ˆ † ˆ 4646L;z{Ox —#9 H C;4MD4 <  . Ñ  ›7 ›7 ” / z  276 % 46‚‡L/B : ™›NPMœL r 3535z / 4 zBD46‚‡3 (jµ 3546|{46y#BD4j… % C;4MD4 xP46O;4M r |{| : |{4™›BD‚‡NP3'B ( 4|W4‚‡46OJB ‚ r M ˆ 46… tJ: n O;…4MD|{z{O;z{O;x l < Q=8 µ O n ‚ t 4MNP™ / z{Nh| r BDz{NhO3NP™ & -Æ(u*}-0Ï, %pP( r O‹… 4 0-Æ(+6 & %32 (#l < 354‚ r OJBDz{yÒxƒw'Ær:‡z{3 H MDzB5BD46O r 3 r |{z{3'BNH™ L r BDC36w ›)œsY¾$©\©«ÝÊVÝw_Q·j¨®ªj­ÃÝw_Q·s±§ Ÿ«¦ÝG©k³ §œsY±§ ¥s¦hÊ §¥j –¦_©«ªs±§ ¥s¦Q§_¤´±©6³µŸk Áä¼³µ©« Ã¨®¡®¡®¥s±§ Ÿk§¨®Æw–·j¥j ’Ê ·©"±_±ÁÈÚÙ¥s± å ¥s±§§œs<±§ ¨®ªj­f³µ©« 8§œj¨¢±ÁÄ ?> Å'Q§Ÿk¨®¡¢± ©k³C§œs<¦_©«ª½§ ©«¡ ±§ Ÿk§Q­"£ÉŸk ¨®­"ªs©« _¤G³µ©« ¼¦Q¡RŸk ¨®§Œ£ ©k³/·j _±Qª½§Ÿk§¨¢©«ªÄ ûAQ§¥s±¡¢©©«Ý¸Ÿk§§œsÂ§Ÿ«±ÝÉ©k³Š­wQªsQ Ÿk§¨®ªj­G³` ©«¯ Вí˜ÑhÈG Q·j _±Qª½§_¤çœsQ ãŸ«±@k.-Œ./nBA-ÈCAED!F"$ .-Œ0/nHGJI‹&KCÈJ3!F"$$.-Œ./nMLNIOÛĊ›)œszŸk­wQªs¤sŸØ¨¢± ¨®ªj¨®§¨RŸk¡®¨®ÜÁ_¤Gè4¨®§œ¶¨®§Q¯ %'&jkh( –  7 v i .  
4  o ‘ lŒa q/9 q d l§a -/.0/1;( = 0/0 9 =?>;@BA ( -.0/1;(cQP 0R 9 Q¿@BA (5-.0/1;(TSUP Q d l – —Œ— õ1 _¤.¨¢¦Q§¨¢©«ªÓè4¨®¡®¡ÙŸk·j·j¡®£wȾ Ÿ«±_¤Ö©«ªÖ ¥j¡¢GВíÁï"ŸwÑhÈ ­"¨®Æ\¨®ªj­f ¨¢±'§©Â§œs¼³µ©«¡®¡¢©Áè4¨®ªj­»ªsQèͨ®§Q¯ ± %'&jih( –V o ‘ 4  D a M BDNhLÏd lNa qs9 q d l¯a -/.01( = 0/0 9 =?>;@BA ( -.0/1;(cQP 0R 9 Q¿@BA (5-.0/1;(TSUP Q d l – —Œ—#9 è4œj¨¢¦œÕè4¨®¡®¡§ ¨®­"­wQ Ÿ<¦œ Ÿk¨®ªÕ©k³A·j _¤.¨¢¦Q§¨¢©«ªs±zÐ`ªs©«§ §œ Ÿk§É§œsI¨®ªs¤jh²5¨®ªã§œs_±IŸ«¦Q§¨®ÆwÓ¨®§Q¯f±É¨¢±[§œs ±Q¡¢_¦Q§_¤¹Q¡¢Q¯fQª½§ÁìÞ±»¨®ªs¤jh²Ȋ¨VÄޝ"Ä®ÈÙ¥jªj¡¢_±±ÃŸkª $%(' ªs©\¤j¨¢±±Q¡¢_¦Q§_¤EÈs§œs'¨®ªs¤jh²Õ¦œ Ÿkªj­w_±Oѱ %pPqh( – “D a M BDNhLÏd 4 7ED a M BDNhL-d “ F a M BDNPL-d lŒa q/9 q d l‹a =?>;@BA ( -.0/1;(cQP 0R¿d l – —Œ— %ps&j( – “D a M BDNhLÏd 4 7ED a M BDNhLÏd “ F a M BDNhL-d l¯a q/9 q d l a Q¿@BA ( -.0/1;(TSUP Q d l – —Œ— %phpP( – “D a M BDNhL-d 4 “ F a M BDNhLÏd lGa q/9 q d l0a -/.0/1;( = 0/0 9 =?>;@BA ( -.0/1;(cQP 0R 9 Q¿@BA (5-.0/1;(TSUP Q d l – —Œ— %p%2h( – “ F a M BDNhLÏd 4 “ a M BDNhLÏd  D a BDNhLÏd l¯a qs9 q d l a -.0/1( = 00 9 =?>@CA (D-/.0/1;(cQP}0R 9 Q@CA (D-/.0/1;(WS)P Q d l – —‹— %p$< ( –LED a M BDNhLÏd 4 7ED a M BDNPL-d LÉF a M BDNhLÏd l0a qs9 q d la =?>;@BA ( -.0/1;(cQP 0R¿d l – —Œ— %pVP( –LED a M BDNPL-d 4 L F a M BDNPL-d la qs9 q d la -/.01( = 0/0 9 =?>;@BA ( -.0/1;(cQP 0R 9 Q¿@BA (5-.0/1;(TSUP Q d l – —Œ— %p%lh( –GLTF a M BDNhL-d 4 L a M BDNPL-d % 7ED a M BDNhLÏd (#la qs9 q d la -.0/1( = 00 9 =?>@CA (D-/.0/1;(cQP}0R 9 Q@CA (D-/.0/1;(WS)P Q d l – —‹— e V ‘ C4‡3'B5MDzWOx …/NJ463vwo% MD46LMD46354OsBoBDC4 3'B5MDz{O;x yN / 4MD4j… tJ: BDC;4 354|W4yBD4j…œ4|W4‚‡46OJB 9st;n B BDC;4 4OsBDzMD4o…/4MDz / r BDz{NhOŽC;z{3'† BDNPM : f žCŸ«¦œ»©k³/е÷wÑ/Ÿkªs¤[е÷;ѦÁŸkªÂ¥jªs¤jQ ­w©z±¦ÁŸkªjªj¨®ªj­ è4¨®§œÍ§œs6Qª§ £Î³À©«  "È!¡¢ÁŸ«¤.¨®ªj­Ò§©¿ _¤.¥s¦__¤ ¨®§Q¯f±»Ð`§œsf¨®ªs¤jh²¸¦_©«ª§Ÿk¨®ªs±Y§œsf Q¯ÃŸk¨®ªj¨®ªj­G¯ÃŸcÊ §Q ¨RŸk¡§©Â¾'­wQªsQ Ÿk§_¤Õ¾£þ§œszªsQèͱQ¡¢_¦Q§¨¢©«ªÑ± %p‘P( –F“D a M BDNhLÏd 4 “GF a M BDNhL-d l§a q/9 q d lTa -/.01( = 0/0 9 Q¿@BA ( -.0/1(WSUP Q d l 
–ƒsCrj„ —‹— %pPkh( –nLED a M BDNhLÏd 4 LTF a M BDNhLÏd lŽa q/9 q d lŒa -.0/1( = 00 9 Q¿@BA ( -.0/1(WSUP Q d l –ƒsCrj„ —‹— ûQ§¥s±z¡¢©\©«Ý–Ÿk§ е÷"òwÑhÄ»›)œsØ³µ©«¡®¡¢©Áè4¨®ªj­É·j _¤.¨¢¦hÊ §¨¢©«ª %pPih( –GLTF a M BDNhLÏd 4 L a M BDNhL-d % 7TD a M BDNPL-d (#l;a qs9 q d la -/.01( = 00 9 Q@BA (D-/.01(WS)P Q d l –ƒsCrj„ —§— ¦ÁŸkª¸¾¥s±_¤–§©Õ±¦ÁŸkª Æ  QÈA _±¥j¡®§¨®ªj­¶¨®ª¸§ŒèÙ©¶©«·.Ê §¨¢©«ªs±ÁȤjQ·Qªs¤.¨®ªj­þè4œsQ§œsQ '©« ¼ªs©«§¼§œs<©«·j§¨¢©«ª Ÿk¡ ñ¼õ¿¨¢±8Ÿ«±±¥j¯f_¤Œ±ÙŸkªs©«§œsQ ¼Ÿ«¦Q§¨®Æw¨®§Q¯ %32hqh( –GLTF a M BDNhLÏd 4 7ED a M BDNhLÏd l;a q/9 q d l¿a Q@BA (D-/.01(WSUP Q d l –ƒsCrj„v…ir+r —‹— ©« 8ŸØ· Ÿ«±±¨®Æw'¨®§Q¯ ± %32/&j( –L F a M BDNhL-d 4 la q/9 q d lGa -/.01( = 0/0Æd l –ƒs-rj„v…r+r —§— ÏΜsQª5¦Q ÁŸk§¨®ªj­ÎŸI· Ÿ«±±¨®Æw¸¨®§Q¯¶È'§œs6±§ ¥s¦hÊ §¥j Ÿk¡Aº¼›´¦_©«ªs±§ Ÿk¨®ª½§±8Ÿk ¼¦œs_¦Ýw_¤Eäsœs©˜èٝQÆwQ ÁÈj¨®ª §œj¨¢±¦ÁŸ«±"Èsªs©ØÆ¨¢©«¡RŸk§¨¢©«ªs±!©¦_¦Q¥j ÁÄ ÐµÿwÑ»¦ÁŸkªÒ¾[¥s±_¤¹§©I±¦ÁŸkª "È4¡¢ÁŸ«¤.¨®ªj­§© Ÿkªs©«§œsQ · Ÿ«±±¨®Æw¨®§Q¯-Ð`è4¨®§œ[ŸØ¤.¨[ZEQ Qª½§¨®ªs¤jh² ÑhÄ ê¼Q "È芝ؤj©»œ ŸÁÆwÂŸÃ¦_©«ªs±§ Ÿk¨®ª§Æ¨¢©«¡RŸk§¨¢©«ª‹±  ¨¢±¼¯ÃŸk Ýw_¤¸Ÿ«±'Ÿkª–©«·Q Ÿk§©« ÁȾj¥j§¼¨®§¼¨¢±¼ªs©«§'¨®ª½§ ©kÊ ¤.¥s¦__¤Õ¨®ª¶§œs'§©«·j¯f©"±§8±·_¦Q¨ Ë Q ·$©"±¨®§¨¢©«ªÄ %32 pP( –vL F a M BDNhL-d 4 l­a{&h9 q d l·a -.0/1;( = 00 9 Q@BA (D-/.01( SUP Q d l –ƒsCrj„ …irur € s-o —‹— еÿ"÷wѶ§ ¨®­"­wQ ±[¦_©«¯Â·j¡¢Q§¨¢©«ª ©k³f§œs 'õ!§©«· ¨®§Q¯ÔÐ`ªs©«§¦ÁŸk¥s±¨®ªj­GŸkª½£G³`¥j §œsQ ¦_©«ªs±§ Ÿk¨®ª§Æ¨¢©kÊ ¡RŸk§¨¢©«ªs±OÑhÈ'è4œj¨¢¦œãŸk­½Ÿk¨®ªÍ§ ¨®­"­wQ ±G¦_©«¯Â·j¡¢Q§¨¢©«ªã©k³ §œs¨®ªj¨®§¨RŸk¡$ó!ñ ö Ï´ž ¹¨®§Q¯¶Ä ö ©jÈèٝ8œ Ÿ_Æw¼Ÿ Ë  ±§ ¦ÁŸkªs¤.¨¢¤sŸk§z§©Â¦_©˜ÆwQ 8§œszQª§¨® ¨®ªj·j¥j§ÁÄ %322h( –TLTD a M BDNhLÏd 4  lTa{&h9 q d lNa -/.01( = 0/0 9 =?>;@BA (D-/.01( cQP}0 R 9 Q@BA (D-/.01(WSUP Q d l –ƒsCrj„v…ir+r € s-o —Œ— %32%< ( –  7 v i .  
4  laW&P9 q d lŒa -.0/1;( = 00 9 =?>;@BA (D-/.01( cQP}0 R 9 Q@BA (D-/.01(WSUP Q d l –ƒsCrj„v…ir+r € s-o —Œ— ý'©«¨®ªj­Â¾ Ÿ«¦Ýþ§©¶Ðµ÷.í˜ÑhÈ.§œj¨¢±!Ÿ«¦Q§¨®Æw'¨®§Q¯é¦ÁŸkªÕ¾$ ¥s±_¤Õ±¦ÁŸkªjªj¨®ªj­   %32 VP( – “D a M BDNhLÏd 4 “ F a M BDNPL-d lGa q/9 q d la -.0/1( = 00 9 =?>;@BA ( -.0/1(cQP}0 Rd l – € s-o —‹— ·j _¤.¨¢¦Q§¨®ªj­ %32lh( – “GF a M BDNPL-d 4 “ a M BDNhLÏd  D a BDNPL-d l¯a qs9 q d l a -/.01( = 00 9 =?>;@BA (5-.0/1;(cQP 0R¿d l – € sÏo —Œ— 꼝Q "È'Ÿkª¨®ª½§Q _±§¨®ªj­¹¦ÁŸ«±¸©k³<±¦ÁŸkªjªj¨®ªj­¹¦ÁŸkª ©\¦_¦Q¥j )±8¨®ª–§œs<¡¢h².¨¢¦_©«ª¸§œsQ ¨¢±zŸkª¸Qª§ £[©k³Ù§œs ±Q¡¢_¦Q§_¤¿¦ÁŸk§Q­w©« £ü))u± 9"Ä6ڌ§±<³dʌŸkªjªs©«§Ÿk§¨¢©«ª Ð`¨®ª§ ©¤.¥s¦Q¨®ªj­ k.-Œ./Œ /3O`Ѥj©\_±zªs©«§z¯ÃŸk§¦œÓ§œs ¨®ªs¤jh²EÄ Š¥j§)§œsQ '¨¢±)§œs¼©«·j§¨¢©«ª¶©k³¥s±¨®ªj­ÃŸ¡¢h²\¨×Ê ¦_©«ªþQª½§ £Â¥jª.³VŸk¨®§œ.³`¥j¡®¡®£wȨ®ª½§ ©\¤.¥s¦Q¨®ªj­ØŸÆ¨¢©«¡RŸk§¨¢©«ª ©k³}!$#%þěñ!©«§§œsz¥jªs¦œ Ÿkªj­w_¤¶¨®ªs¤jh²EÄ %32 ‘P( –“ F a M BDNhL-d 4  D a BDNPL-d la q/9{& d la -/.01( = 0/0 9 =?>;@BA ( -.0/1;(cQP 0R¿d l – € sÏo 'o —‹— !_¦ÁŸk¡®¡ §œ Ÿk§—c¼õ(-,§©«·$¨¢±ŠQ¨®§œsQ  ¼õ(-,§©«·©«  ü/õ-,§©«·ÛÄ!›)œs'õΩ«·j§¨¢©«ª[è4¨®¡®¡­"¨®Æwz¥s±'Ÿf· Ÿk ’Ê §¨RŸk¡Š¤jQ ¨®Æ«Ÿk§¨¢©«ªIæ½¥j¨®§f±¨®¯Â¨®¡RŸk z§©¶§œs 'õ!§©«· ¦ÁŸ«±fŸk¾©ÁÆw[Ðе÷"òwÑÛÊOеÿ.í˜ÑÑhÄ»º!ªj¡®£–§œsÂ©«¾ å _¦Q§Y¦ÁŸkª.Ê ªs©«§Ø¾» ÁŸk¡®¨®ÜÁ_¤¨®ª§œs'õ?È ±¨®ªs¦_þ¨®§±< _±©«¥j ¦_ · Ÿk§œ¸œ Ÿ«±YŸk¡® ÁŸ«¤.£É¾_QªÉ¥s±_¤EÄ ö ©jÈE芝حwQ§z©«ªj¡®£ §œs'³µ©«¡®¡¢©Áè4¨®ªj­»· Ÿ«±±¨®Æw'¨®§Q¯ ± %32hkh( –TLED a BDNhLÏd 4  lTa q/9 q d lNa -/.01( = 0/0 9 =?>;@BA (5-.0/1;( cQP 0R¿d l – € s-o ' o ƒs-rj„ …ir+r —‹— ›)œj¨¢±· Ÿ«±±¨®ÆwC¨®§Q¯ ¦ÁŸkªØ¾$C¥s±_¤Y¨®ªØ¦_©«¯Â·j¡¢Q§¨¢©«ª ¦_©«¯¾j¨®ªs_¤»è4¨®§œ–еÿ;"ÑhÈ\ _±¥j¡®§¨®ªj­Y¨®ª»§œs³µ©«¡®¡¢©Áè4¨®ªj­ · Ÿ«±±¨®Æw¨®§Q¯f±)± %32hih( –—“ F a M BDNPL-d 4 lva qs9{& d l a -/.01( = 0/0 9 =?>;@BA (5-.0/1;( cQP 0R¿d l – € s-o ' o ƒs-rj„ …ir+r —‹— % < qh( –o“D a M BDNPL-d 4 lTa qs9{& d l a -/.01( = 0/0 9 =?>;@BA (5-.0/1;( cQP 0R 9 Q¿@BA (D-.0/1(WSUP Q d l – € sÏo ' ovƒsCrj„v…ir+r —§— ›)œj¨¢±8è4¨®¡®¡ Ë ª Ÿk¡®¡®£Õ§ ¨®­"­wQ 
'¦_©«¯Â·j¡¢Q§¨¢©«ªÉ©k³8ВíÁòwÑhÄ ñ¼©«§C§œ Ÿk§§œsQ Š¨¢±Ÿk¡® ÁŸ«¤.£<Ÿ· Ÿ«±±¨®ÆwÙó¼ñ ö Ï´ž ¨®§Q¯Ôè4¨®§œ¹§œs¶±Ÿk¯fÕ¨®ªs¤jh²Œ±Öеÿ;ÑhÄ ö ©jȊ¦Q¡RŸ«±±¨×Ê ¦ÁŸk¡®¡®£wÈE芝ØèÙ©«¥j¡¢¤¸œ ŸÁÆwÃŸþ¦ÁŸ«±Ø©k³Ù¾j¡¢©\¦Ý\¨®ªj­sÄ Ù¥j§ œsQ "ȧœsÃ¦_©«ªs±§ Ÿk¨®ª½§<·j © Ë ¡¢f¨¢±Y¦_©«¯Â· Ÿk _¤EÈ1Ÿkªs¤ ¨®§<§¥j ªs±Â©«¥j§§œ Ÿk§Ø§œsþªsQèúó!ñ ö ÏΞ ú¨®§Q¯-¨¢± ¯f©« Ãœ Ÿk ¯f©«ªj¨¢¦"Ä ö ©Gèٝ Q·j¡RŸ«¦_Ã§œs Ë  ±§z¨®§Q¯ ¾£þ§œj¨¢±ªsQè5©«ªs± % <&j( –  7 v i .  4 lna qs9W& d lŒa -/.0/1;( = 0/0 9 =?>;@BA (5-.0/1;( cQP 0R 9 Q¿@BA (D-.0/1(WSUP Q d l – € sÏo ' ovƒsCrj„v…ir+r —§— Š¥j§¸·j ©\¦__±±¨®ªj­Í¨¢±6ªs©«§¸£wQ§ Ë ªj¨¢±œs_¤EÄ ›)œs ©«§œsQ É©«·j§¨¢©«ª5³µ©« Óеÿ;"Ñ è)Ÿ«±G§œ Ÿk§[§œs6±Q¡¢_¦Q§_¤ c'õ-,§©«·'¨¢±ÕŸkªÎü/õ-,§©«·Ûä'§œsQªÈèٝɭwQ§»§œs ³µ©«¡®¡¢©Áè4¨®ªj­»ªsQè㟫¦Q§¨®Æw¨®§Q¯-еŸk¯f©«ªj­Ã©«§œsQ ±hѱ % <JpP( – “D a BDNhL-d 4 “ F a BDNhLÏd lGa q/9{& d l0a -/.0/1;( = 0/0 9 =?>;@BA ( -.0/1;(cQP 0R¿d l – € sÏo 'o —‹— ›)œj¨¢±è4¨®¡®¡·j _¤.¨¢¦Q§ % < 2h( – “GF a BDNhLÏd 4 “ a BDNhLÏd  D a BDNhLÏd l¯a qs9W& d l a -.0/1( = 00 9 =?>@CA (D-/.0/1;(cQP}0R¿d l – € s-o ' o —Œ— e W  |{BDCN n xhC 4 0-Æ(+6 & y6N n |a… t 4 ‚‡Ns…4|W|{4j… BDN t 4 y#C46y ˆ 4j… O;NHB n OJBDzW|;L r 3535z / 4FzBD46‚‡3 r MD4FyMD4 r BD46… 9 zBz{3 O r B n † M r | BDN ˆ 4646LoB5M r y ˆ NH™ / z{Nh| r BDz{NPO;3 r |{NPO;x H z{BDC |{4 7 z{y r | r yy6463536w è4œj¨¢¦œ;¦ÁŸkª;¦_©«¯Y¾j¨®ªsè4¨®§œ;Ÿkªs©«§œsQ –¥jª.³µŸk¨®§œ.³À¥j¡ ¥s±©k³ 9;± % << ( – “GF a BDNhLÏd 4  D a BDNPL-d la qs9 p d la -/.0/1;( = 0/0 9 =?>;@BA ( -.0/1(cQP}0 Rd l – € s-o ' o 'o —Œ— ó!­½Ÿk¨®ªÈ 芝Yœ ŸÁÆwØŸÃ¦œs©«¨¢¦_Y³À©« œc¼õ(-,§©«·ÛÄÏ6 ¦ÁŸkªG·j¨¢¦Ý 'õ-,§©«·/Ÿk­½Ÿk¨®ªÈ¾Q¨®ªj­þŸk¾j¡¢§©f Q¥s± еÿ"òwÑhÄ Y ©«¯Â·j¡¢Q§¨¢©«ª[­"¨®Æw_±¥s± % <ÆVP( –—“ F a BDNhL-d 4  lva q/9 p d l a -/.0/1;( = 0/0 9 =?>;@BA (D-/.01( cQP}0 Rd l – € s-o ' o ' o ƒs-rj„ …ir+r —‹— è4œj¨¢¦œÉ¦_©«¯Â·j¡¢Q§_±Ð ½÷wÑ4Ÿkªs¤¶¥j¡®§¨®¯ÃŸk§Q¡®£¸Ðµÿ;"ѱ % < lh( –—“GF a M BDNhL-d 4  lva q/9 p d l a -/.0/1;( = 0/0 9 =?>;@BA (D-/.01( cQP}0 Rd l – € s-o ' o ' o ƒs-rj„ …ir+r —‹— Š¥j§ªs©«§<§œ 
Ÿk§z±¥s¦œ6Ÿkª–¨®§Q¯7h²\¨¢±§±'Ÿk¡® ÁŸ«¤.£n± еÿ"îwÑhÄ Y ©«¯Â· Ÿk ¨®ªj­–§œsþ¦_©«ªs±§ Ÿk¨®ª½§Ø·j © Ë ¡¢_±ÁÈ §œs ªsQ蹩«·j§¨¢©«ª»¨¢± ¡¢_±±1œ Ÿk ¯f©«ªj¨¢¦<Ð`§œsh².¨¢±§¨®ªj­z¨®§Q¯ ¥s±_¤–©«ªj¡®£[©«ªs 9;0± U.È®í `ÑhÄüj¥j §œsQ ·j _¤.¨¢¦Q§¨¢©«ªs± è4¨®§œ[c'õ-,§©«·Ÿ«±1ü/õ-,8§©«·$Ÿk )¾j¡¢©¦Ýw_¤GеŸ«± ¨®ª ¦Q¡RŸ«±±¨¢¦ÁŸk¡?¦œ Ÿk §· Ÿk ±¨®ªj­ëc­wQªsQ Ÿk§¨¢©«ªÑhÄ ›)œs¶±Ÿk¯Â·j¡¢G¤jQ ¨®Æ«Ÿk§¨¢©«ª¹±œs©Áè±fœs©˜è̟kª¿¨®ª Ë Ê ªj¨®§þ±Q§©k³8¦ÁŸkªs¤.¨¢¤sŸk§_±¶Ð`¾ Ÿ«±¨¢¦ÁŸk¡®¡®£  Å9;Å9; ”Æ  kъ¨¢±4¤.¨¢±¦ÁŸk ¤j_¤GŸ«±!Ÿkª¶_æ½¥j¨®Æ"Ÿk¡¢Qªs¦_¦Q¡RŸ«±±ÁÄ ›)œj¨¢±É¯ÃŸkÝw_±6º¼› ·j ©\¦__±±¨®ªj­Îè4¨®§œ Ÿkªø¨®ª Ë ªj¨®§ ¦ÁŸkªs¤.¨¢¤sŸk§'±Q§C·©"±±¨®¾j¡¢"Ä/›)œs!h²jŸk¯Â·j¡¢!芟«±)±¨®¯ØÊ ·j¡¢"Ès¾j¥j§!§œsz§_¦œjªj¨¢æ½¥s<¦ÁŸk  ¨¢_±!©˜ÆwQ !§©þŸk¡®¡A¦_©«ª.Ê ±§ Ÿk¨®ª§±¶±Ÿk§¨¢±’³À£¨®ªj­¿Ÿ«±±¥j¯Â·j§¨¢©«ªôВí ÑhÄ ÚÛ§þ¯ÃŸ_£ ¾ _æ¥j¨® _¤»§© Û· Ÿ«±±6"!ŸY¦Q£\¦Q¡¢¼±QÆwQ Ÿk¡$§¨®¯f_±Š¾$hÊ ³µ©« Õ±§ ¨¢¦Q§Âœ Ÿk ¯f©«ª½£¿¤j_¦Q ÁŸ«±ÖÐ`­"¥ Ÿk Ÿkª§__¤Ç¾£ ВíÁÿwÑÑf§ŸkÝw_±»ZE_¦Q§ÁÄ›)œs¶· Ÿ«±±_±Ã©k³z§œj¨¢±þ¦Q£\¦Q¡®¨¢¦ ±§ ¥s¦Q§¥j 4œ ŸÁÆw§œs4ZE_¦Q§C©k³Ÿ_Æw©«¨¢¤.¨®ªj­Y§œsÆ¨¢©«¡RŸcÊ §¨¢©«ªÓ©k³4±©«¯fÃœj¨®­"œ.ÊV ŸkªjÝw_¤Ó¦_©«ªs±§ Ÿk¨®ª§ÁÄ  ÏÒ¨®§œ §œs8¦_©«ªs±§ Ÿk¨®ª½§)±¨®ÜÁ4¾©«¥jªs¤j_¤EȽ§œj¨¢±Ù¦_©«ªs±§ ¥s¦Q§¨¢©«ª ¨¢±­"¥ Ÿk Ÿkª§__¤Ö§©Õ§Q ¯Â¨®ª Ÿk§"Äþó4¤j¤.¨®§¨¢©«ª Ÿk¡)¦Q£\¦Q¡¢ · Ÿ«±±_±è4¨®¡®¡AŸk­½Ÿk¨®ª[¦ÁŸk¥s±¤jQ§Q ¨¢©« Ÿk§¨¢©«ªÄ ‹™ Æ¥  ¿žC¢   §£¥§¦ T žÏ£¥§¦¡ ¥ ¦ ¿¥ žC  Ï£›Ž¥ ó¼±G¤.¨¢±¦Q¥s±±_¤Í¨®ª5±_¦"ļÿ.ȧœs6 _¦_©«­"ªj¨®§¨¢©«ª¬Ÿkªs¤ · Ÿk ±¨®ªj­þ§Ÿ«±Ý ³µ©« ¼Ÿkª6º¼›ã±£.±§Q¯Ì¨®ª½Æw©«¡®Æw_±'· Ÿk ±’Ê ¨®ªj­Ÿkªs¤Ã¾ Ÿ«¦Ý\芟k ¤»­wQªsQ Ÿk§¨¢©«ªÄ1üs©«¡®¡¢©Áè4¨®ªj­<¨¢¤jÁŸ«± ©k³ŠÐÀñ¼Q¥j¯ÃŸkªjªÈíÁî"î"òwÑhÈjÚ/ ÁŸk¡®¨®ÜÁ!§œj¨¢±Š¨®ªÕŸkª ¨®ª½§Q ’Ê ¡¢ÁŸ_Æw_¤Éè)Ÿ_£wÈEŸ«±±¥j¯Â¨®ªj­ÕŸf¤j©«¥j¾j¡¢¨®ªs¤jh²¶³µ©« 8· Ÿ«±’Ê ±¨®Æw'¨®§Q¯f±ÁÈ ±©Ø§œsQ£þ¦ÁŸkªÕ¾'¥s±_¤Õ¨®ªÕ¾©«§œ¶¤.¨® _¦hÊ §¨¢©«ªs±ÁÄ<üs©« · Ÿk ±¨®ªj­Õè4¨®§œ6©«·j§¨®¯Â¨®Ü_¨®ªj­¶¾ Ÿ«¦Ýè)Ÿk ¤ u # 7 NPBD4 BDC r B r O 4j…/xh4 r / NPzW…z{O;x‡BDC;4 C;z{xPC†gM r O ˆ 4j… y6NPO† 3'B5M r z{OJB H zW|{|•C r / 4 t 4646O yNhO;3'B5M n 
yBD4j… r |MD4 r … : t 4™›NPMD4 C;zB5† BDz{O;x BDC;4 MD46y n MD35z{NPO 9Jr OJBDzWyzWL r BDz{O;xFBDC;4| r MDxh4M3'B5M n yB n MD4 MD4† ‰ n zMD4j…}w % ‘ C;z{3 z{3 t 4y rPn 354™8NHM r O : yNhO;3'B5M r z{OsB BDC r B ‚ r6: NPM‚ r6: ONPB t 4 / z{NP| r BD4j… 9t NHBDC NhL/BDzWNPO;3 r MD4 46OJBD4MD4j…]BDN BDC;4 y#C r M5Bjw ( ‘ C n 3 9 BDC;4 / z{Nh| r BDzWNPO;3Fz{O;y n M5MD4j… tJ: BDC;4 y : y6|{4 H z{|{|!O;NHBFy n B BDCzW3 t M r Oy#C w ­wQªsQ Ÿk§¨¢©«ªÈ'§œs[³µ©«¡®¡¢©Áè4¨®ªj­Ç·j ©\¦__¤.¥j –¨¢±ÕZ_¦hÊ §¨®Æw± ›)œsŸk­wQªs¤sŸ<¨¢±)¨®ªj¨®§¨RŸk¡®¨®ÜÁ_¤Õ¾£þŸkªGŸ«¦Q§¨®Æw'· Ÿk ±’Ê ¨®ªj­[¨®§Q¯¶È/¨®ªs¤jh²._¤Ö¾½£¸§œsþQª½§¨® »¨®ªj·j¥j§<±§ ¨®ªj­sÄ ó)§f§œs ·©«¨®ª½§Ãè4œsQ ¶ªs©« ¯ÃŸk¡®¡®£ÇŸÉ· Ÿ«±±¨®ÆwÕ¨®§Q¯  ¨¢±¼Ÿ«¤j¤j_¤G§©Â§œsY¦œ Ÿk §ÁÈ$Ÿkª–Ÿ«¦Q§¨®Æwz­wQªsQ Ÿk§¨¢©«ª ¨®§Q¯Ì¨¢±·j¥j§!©«ª[§œsŸk­wQªs¤sŸ.Èè4¨®§œG§œs±Q¯ÃŸkª½§¨¢¦ ¨®ªs¤jh²[¦_©«ªs±§ ¥s¦Q§_¤¶³µ©«    ¨®ªG· Ÿk ±¨®ªj­sÄ8›)œj¨¢±!è4¨®¡®¡ § ¨®­"­wQ zŸkªÉ¨®ª½§Q ¯f_¤.¨RŸk§Ø­wQªsQ Ÿk§¨¢©«ª–·jœ Ÿ«±"ȝh²Ê ·j¡¢©« ¨®ªj­–Ÿk¡®§Q ª Ÿk§¨®Æw»Ÿkª Ÿk¡®£.±_±ÁÄf꼝Q "ÈA©«·j§¨®¯Â¨®Ü˜ŸcÊ §¨¢©«ª;Ÿk·j·j¡®¨¢_±ÁÈz¡¢ÁŸ«¤.¨®ªj­Î§©ÒŸkª ©«·j§¨®¯ÃŸk¡<¨®§Q¯  ³µ©« Õ§œs6±Q¯ÃŸkª§¨¢¦6¨®ªs¤jh²´¥jªs¤jQ G¦_©«ªs±¨¢¤jQ Ÿk§¨¢©«ªÄ ÏΜsQª¶§œs­wQªsQ Ÿk§¨¢©«ªÉ·jœ Ÿ«±¨¢± Ë ªj¨¢±œs_¤EÈs§œs  ¨¢±4¦_©«¯Â· Ÿk _¤ §©  Ä ÚÛ³A¨®§)¨¢±)¨¢¤jQª½§¨¢¦ÁŸk¡VÈ   ¨¢±4Ÿ«¦Q§¥.Ê Ÿk¡®¡®£¸Ÿ«¤j¤j_¤¸§© §œsÂ¦œ Ÿk §Áä¨×³)ªs©«§ÁÈE¨®§¨¢±¥jªj­" Ÿk¯ØÊ ¯ÃŸk§¨¢¦ÁŸk¡EŸkªs¤Ãè4¨®¡®¡¾$!¤.¨¢±¦ÁŸk ¤j_¤[Ð`¯f©« !·j _¦Q¨¢±Q¡®£wÈ ŸÃ _¦_©« ¤É¨¢±!ÝwQ·j§¼§œ Ÿk§'§œj¨¢±¼· Ÿk §¨¢¦Q¥j¡RŸk ¨®§Q¯Ìœ Ÿ«± ¾_Qªf±œs©Áè4ªÂ§©'¾Š¥jªj­" Ÿk¯Â¯ÃŸk§¨¢¦ÁŸk¡dÑhÄ1ûAŸk§Q 1­wQª.Ê Q Ÿk§¨¢©«ªÕ·jœ Ÿ«±_±4¦ÁŸkª ¥s±!¨®ª½§Q ¯f_¤.¨RŸk§Y _±¥j¡®§±)©k³ §œszÁŸk ¡®¨¢Q !©«ªs_±ÁÄ ‡‰ˆ‹Š‹# ƨ¢©«¡RŸk§¨¢©«ªs±´¨®ª§ ©¤.¥s¦_ §©ú· Ÿk ±¨®ªj­ ±¨®¯Â¨®¡RŸk þ±¨®§¥ Ÿk§¨¢©«ªs± Ÿ«±!$#%+¤.¨¢¤Ç§©¸­wQªsQ ŸcÊ §¨¢©«ª‹±¨®ªÃ©« ¤jQ Ù§©z¦_©«ªs±¨¢¤jQ ÙŸk¡®¡·$©"±±¨®¾j¡¢)¥jªs¤jQ ¡®£½Ê ¨®ªj­¿ Q·j _±Qª§Ÿk§¨¢©«ªs±ÁÈzŸ¦Q£.¦Q¡®¨¢¦É¦œ Ÿk §G±§ ¥s¦Q§¥j  œ Ÿ«±[§©Ç¾6¤jÁŸk¡®§Gè4¨®§œÄ ÚV³ hK9‹›? ? 
s '‹BÊP '›  ) 'K‹ ¨¢±G¯f©\¤jQ¡¢_¤úÐ`¨VÄޝ"Ä®Èz§œsI©«·j§¨®¯ÃŸk¡<­wQª.Ê Q Ÿk§¨¢©«ª6¦ÁŸkªs¤.¨¢¤sŸk§_±Y¥jªs¤jQ ­w©¶Ÿkªs©«§œsQ z¦_©«¯Â·$Q§¨×Ê §¨¢©«ªÈE¡¢ÁŸÁƨ®ªj­¶©«ªj¡®£¸Ÿþ¦ÁŸkªs¤.¨¢¤sŸk§Â§œ Ÿk§'¨¢±'¾$_±§¼¨®ª ¾©«§œÇ¤.¨® _¦Q§¨¢©«ªs±OÑhÈÙ§œsÕ¦_©«¯Â·j¥j§Ÿk§¨¢©«ª Ÿk¡!±©«¡®¥j§¨¢©«ª ³µ©« C­wQªsQ Ÿk§¨¢©«ª è4¨®¡®¡¦ÁŸk  £»©ÁÆwQ )¤.¨® _¦Q§¡®£f§©z· Ÿk ±’Ê ¨®ªj­± §œs'¨®§Q¯f±8¦_©«ªs±§ ¥s¦Q§_¤Õ¨®ªÕ· Ÿk ±¨®ªj­ÃŸk zŸk¡¢±© ¦œs_¦Ýw_¤I³µ©« ¦_©«ªs±§ Ÿk¨®ª§Æ\¨¢©«¡RŸk§¨¢©«ªs±ÂŸkªs¤ Ë ¡®§Q _¤ Ÿ«¦_¦_©« ¤.¨®ªj­"¡®£wÄ Š¥j§Í‡‰ˆ‹ŠŒ$#%ÔÆ\¨¢©«¡RŸk§¨¢©«ªs±GŸk 6Ÿk¡¢±©¿Ÿ«±±¥j¯f_¤ ¨®ªÓ¯f©¤jQ¡¢±è4¨®§œs©«¥j§Y¾j¨¢¤.¨® _¦Q§¨¢©«ª Ÿk¡4©«·j§¨®¯Â¨®Ü˜Ÿk§¨¢©«ª Ð`è4œj¨¢¦œ 9;à_æ½¥j¨® z¾j¨¢¤.¨® _¦Q§¨¢©«ª Ÿk¡ Ž?) $ $P  "Ȧh³’Ä ±_¦"Ä'ÿwÑhÄ ö ©jÈY§œsÖ _¦_©«­"ªj¨®§¨¢©«ª;§Ÿ«±Ý¾ Ÿ«±_¤ã©«ª §œs_±Í¯f©¤jQ¡¢±Ç±_Q¯f±Ç§©;·©"±ÍŸ ¤j_¦Q¨¢¤sŸk¾j¨®¡®¨®§Û£ ·j ©«¾j¡¢Q¯¶Ä ö ¨®ªs¦_'§œsQ YŸk '¨®ª Ë ªj¨®§Q¡®£ ¯ÃŸkª½£ ·$©"±Ê ±¨®¾j¡¢Š¥jªs¤jQ ¡®£\¨®ªj­'³À©« ¯f±?³µ©« /Ÿ!­"¨®ÆwQªÂ±§ ¨®ªj­sÈ«§œsQ  ¨¢±Gªs©Ç±§ Ÿk¨®­"œ½§’³µ©« 芟k ¤ã·j ©\¦__¤.¥j Ö©k³fŸk·j·j¡®£¨®ªj­ Û¾ Ÿ«¦Ýè)Ÿk ¤Ò­wQªsQ Ÿk§¨¢©«ª•"[§©ÖÁŸ«¦œ´©k³§œsQ¯¶Ä›A© ­"¥ Ÿk Ÿkª§_f¤j_¦Q¨¢¤sŸk¾j¨®¡®¨®§Û£wȝQ¨®§œsQ ÃÐÀ±§ ©«ªj­}hhÑ4¾j¨¢¤.¨×Ê  _¦Q§¨¢©«ª Ÿk¡Š©«·j§¨®¯Â¨®Ü˜Ÿk§¨¢©«ªÖœ Ÿ«±'§©Õ¾ÂŸ«±±¥j¯f_¤EÈA©«  §œs4¤jQ­" _©k³ ‡‰ˆ‹ŠŒ$#%øÆ¨¢©«¡RŸk§¨¢©«ªs±1·j ©¤.¥s¦__¤Â¾£ hɜ Ÿ«±4§©Â¾'¡®¨®¯Â¨®§_¤EÄj u#e “;NHM r …z{35y n 3535zWNPO NP™ H 4 rHˆ 4M‚‡Ns…46|{3 9 35464 % mon C;O 9 pPqPqhqhrh( w u5u B—z{3 C;N H 4 / 4M—yNhO;y46z / rHt |{4‡BDC r Bz{O BDC;4‡z{OJBD4MD|{4 r / 4j… Ɵ E¢ ¿Ÿ ¥ )  Ï£›Ž¥¯™ CŸk ­w_±;ВíÁî"î;"ÑÇ·j ©ÁÆ\¨¢¤j_± ŸkªÒh².·Q ¨®¯fQª½§Ÿk¡ ö ¨¢¦_±§¥s±Ãõ1 ©«¡¢©«­I¨®¯Â·j¡¢Q¯fQª§ŸcÊ §¨¢©«ªÖ©k³YÐÀñ¼Q¥j¯ÃŸkªjªÈÙíÁî"î"òwÑhÄÕº!ª6§œj¨¢±z¾ Ÿ«±¨¢±Áȧœs Ÿk¡®­w©« ¨®§œj¯+¨®¡®¡®¥s±§ Ÿk§_¤¨®ªÇ±_¦"Ä sÄ ÷.Èٟkªs¤I§œs ¨®ª.Ê §Q ¡¢ÁŸ_Æ\¨®ªj­ ¤.¨¢±¦Q¥s±±_¤Õ¨®ª[±_¦"Ä sÄ ÿœ ŸÁÆwz¾_QªG¨®¯ØÊ ·j¡¢Q¯fQª½§_¤EÄ ›)œsÓ· Ÿk ±Q Oëc­wQªsQ Ÿk§©« ¸œ Ÿ«±[¾$_Qª §_±§_¤ è4¨®§œ;±¯ÃŸk¡®¡Â­" Ÿk¯Â¯ÃŸk ¸³À Ÿk­"¯fQª½§±¸³` ©«¯ §œs§œs_©« Q§¨¢¦ÁŸk¡ 
º¼›´¡®¨®§Q Ÿk§¥j "Ä  âÛD«šjHCD«D«âV>?@ Ú/·j ©«·©"±_¤þŸ¦œ Ÿk §’ÊV¾ Ÿ«±_¤Õº¼›Ÿ«¦_¦_©«¥jª§Ù³µ©« Ù±£ª.Ê §Ÿc²EÈc¯ÃŸkÝ\¨®ªj­¦Q ¥s¦Q¨RŸk¡.¥s±C©k³ ¨®ª§Q ¡¢ÁŸ_Æ\¨®ªj­z©k³ · Ÿk ±’Ê ¨®ªj­<Ÿkªs¤»­wQªsQ Ÿk§¨¢©«ªÄ1›)œsQ 'Ÿk 8±QÆwQ Ÿk¡$©«¾½Æ\¨¢©«¥s± ±©«¥j ¦__±z³À©« Yh².·©«ªsQª½§¨RŸk¡Š¾$Qœ ŸÁƨ¢©«¥j )± $ Ð`¨dÑ'¦_©«ª.Ê ±§ Ÿk¨®ª§!Ÿk·j·j¡®¨¢¦ÁŸk§¨¢©«ªG¡¢ÁŸ«¤j±4§©fœj¨®­"œj¡®£þ¤.¨¢± å ¥jªs¦Q§¨®Æw ±§ ¥s¦Q§¥j _±Áä)Ð`¨®¨dÑ8³µ©« ¼­wQªsQ Ÿk§¨¢©«ªÈ?§œsØª½¥j¯¾$Q ©k³ ¨®§Q¯ ±Q§±z¯ÃŸ_£É­" ©Áè h².·©«ªsQª½§¨RŸk¡®¡®£–¨®ª¸§œsf±¨®ÜÁ ©k³/§œs¨®ªj·j¥j§4³`ÊÛ±§ ¥s¦Q§¥j "Ä U üs©« GÐ`¨dÑhÈÙ±©«·jœj¨¢±§¨¢¦ÁŸk§_¤§_¦œjªj¨¢æ¥s_±Ø³À ©«¯³ÀÁŸcÊ §¥j <­" Ÿk¯Â¯ÃŸk · Ÿk ±¨®ªj­6Ðɟc².èٝQ¡®¡1Ÿkªs¤ zŸk·j¡RŸkªÈ íÁî"î"òwÑ!¯ÃŸ_£[œsQ¡®·Èh².·j¡¢©«¨®§¨®ªj­ 9 Æ; J + ¨®ªs¤jQ·Qª.Ê ¤jQªs¦_´©k³G±§ ¥s¦Q§¥j _±¨®ª ¯f©"±§I¦ÁŸ«±_±ÁÄ û©\¦ÁŸk¡×Ê ¨®§Û£; _±§ ¨¢¦Q§¨¢©«ªs±Ó¤.¨¢±¦Q¥s±±_¤¬¨®ªÌÐ'¥jœjªÈ ÷k¾Ñ ¯ÃŸ_£ œsQ¡®·¬§©Í¡®¨®¯Â¨®§Ö·j ©«¾j¡¢Q¯ Ð`¨®¨dÑhÄ ó8¡¢±©jȝh²Ê ·j¡¢©«¨®§¨®ªj­Ø·j _¦_©«¯Â·j¥j§_¤þ¨®¯Â·j¡®¨¢¦ÁŸk§¨¢©«ªs±4©k³§œs8¦_©«ª.Ê ±§ Ÿk¨®ª§4 Ÿkªjݨ®ªj­Â±œs©«¥j¡¢¤ œ ŸÁÆwzŸ¦_©«ªs±¨¢¤jQ Ÿk¾j¡¢h³dÊ ³µ_¦Q§ÁÄCê'Ÿ_Æ\¨®ªj­Ã¦_©«¥s¦œs_¤[¦_©«¯Â·j¥j§Ÿk§¨¢©«ª Ÿk¡ º¼›Í±£ª.Ê §Ÿc²Ø¨®ªÂ§œs)èٝQ¡®¡×ÊÛ±§¥s¤.¨¢_¤Â· Ÿk Ÿ«¤.¨®­"¯ ©k³$žCŸk ¡¢Q£Â¤jhÊ ¤.¥s¦Q§¨¢©«ª5è4¨®¡®¡Yœs©«·$h³À¥j¡®¡®£Î³µŸ«¦Q¨®¡®¨®§Ÿk§Ó±¥s¦œøh²\§Qª.Ê ±¨¢©«ªs±8Ÿkªs¤¶¨®¯Â·j ©ÁÆwQ¯fQª½§±˜Ä ‹¥§¢ T¦0 Ÿ ¿¥   ›)œj¨¢±¼ _±ÁŸk ¦œ–è)Ÿ«±¼±¥j·j·$©« §_¤[¾½£¶§œs ö ü ãÿ;* Ð`·j © å _¦Q§ 'íÁ÷wÑÓ©k³G§œs´Å'Q¥j§±¦œsü ©« ±¦œ¥jªj­w±’Ê ­wQ¯fQ¨®ªs±¦œ Ÿc³`§ÁÄ ›)œ ŸkªjÝ\± §© Xw©wŸkª Š _±ª ŸkªÈ ó!ªsQ§§éüj ŸkªjÝÈ !©«ª zŸk·j¡RŸkªÈçɟk §¨®ª zŸ_£wÈ Xw©«œjª –Ÿc²\芝Q¡®¡VÈ!ê'Ÿ«¤sŸk  ö œsQ¯Y§©˜Æ$ÈŸkªs¤ X j ­wQª Ï6_¤jQÝ\¨®ªs¤´³À©« ÕÆ"Ÿk¡®¥ Ÿk¾j¡¢6¤.¨¢±¦Q¥s±±¨¢©«ªÍ©k³<Æ"Ÿk ¨¢©«¥s± r LLMDN r y€C 9r OŽz{O‹…zMD46y#B y6NhOJB5MDNP|NP™!BDC4 z{O/Œ‹O;zBD4 y r O;…zW… r BD4 35L r y64 t 46y6NP‚‡463 LNh3535z t |{4 %gr B|{4 r 3'B ™8NHM r |W||{zWOx n zW3'BDz{y r |{| : z{OJBD4MD463'BDz{O;x y r 35463 ( w ‘ C4zW…4 r H N n |a… t 4 BDC r B ™8NHM r |W| r O r |† : 35463 r 
MDzW35z{Ox BDCMDN n xPC r MD46y n MD35z / 4]|{NJNhLz{O L r MD35z{O;x 9 zB y r O t 435CN H O3 : 3'BD46‚ r BDzWy r |{| : BDC r B•BDC4 : r MD4LMDNs… n yBDz{NhO/† t‹r 354j… |{Nh354MD3 BDNŽ35NP‚‡4 ‚‡NPMD4 C r MD‚‡NhO;z{y y6NP‚‡L4BDzBDNPM H z{BDC BDC;4 3 r ‚‡4 z{OL n B % y™w r |{35N % mon C;O 9}pHqhqhqPr (5( w ‘ C;z{3 C r 3 BDN t 4 …4#™84M5MD4j… BDN—™ n B n MD4 MD46354 r MDy€C C;N H 4 / 4Mjw u'~ z n B O;NHBD4 BDC r B L r MD35z{O;x z{3 r |{MD4 r … : H NPMD3'B5†y r 354o4 7 LNH† O;4OsBDz r | ™›NPM BDC;4 t;r 354oxPM r ‚‡‚ r MD36w u ~ D MDN t |{46‚ % z{z ( z{3 r BDC46NPMD4#BDzWy r | NhL/BDz{NhO 4 / 46O ™›NPM O;NhO/†  ‘ xh4O;4M r BDzWNPO % m r6:J9&jihi%l (#9!t;n B BDC;4 H zW…/4†35LMD4 r … n O/† ™ r zBDC™ n |{O;46353 t MDz{O;xP3 z{BFN n B z{OšBDC;4  ‘ y r 354hw ¨¢±±¥s_±  Q¡RŸk§_¤´§©§œsÉ³À©« ¯ÃŸk¡®¨®Ü˜Ÿk§¨¢©«ªãŸkªs¤´·j ©kÊ ¦__±±¨®ªj­f©k³1º¼›Í±£ª§Ÿc²Ä K Ks™dK@Cš K D f N r O·zMD4635O r O?w &jiPihk w  L/BDzW‚ r |!3 : OJB rH7 w O f w 2 4 ˆsˆ 4#MD3 9 “ w / r O …/4M‡’ 464 n H 9 r O‹… f w / r O …4 i 46z 1 4M 9 4j…/zBDNPMD3 9 )B xmpt x „¿sCrKo%qu„ŽsÏo%woho „ 8 „$wCWt':nt%w'§y)x : …uxƒ x3o%w;w  7 ™8NHM€… Oz / 4MD35zB : D MD463536w ‘ N r LL4 r Mjw f N r O z MD4635O r O?w pHqhqPq w 9 r:%x yut#+: !)wyi x o%wt 8 „$wCWt':Pw z | r y ˆ H 46|{| w ‘ N r L;L4 r Mjw f r O4 ” MDz{‚‡35C r H r O;…pL z{4MDz v r ‚‡4 ˆ †’?Ns…/N / z{y6z w &jihiPk w  L/† BDzW‚ r | 3 nt 1 46yBD3 r O‹… 3 n;t 1 4yB n O;z / 4MD3 r |W36w] O”D•wz r M5† t Nh3 r/9 2 w “N 7}9 D•w 0 r xh3'B5MDNh‚ 9 Ñ w Ñ y)” z{O;Oz{3 9 r O‹… 2 w‹D•46354BD3 ˆJ:J9 4j…zBDNPMD3 9#" …·ƒs-r%$Trj…+  o?o"'  wBo s'& 9 L r xP463 &ji%2)(/pJ&ji w Ñ  ‘ DMD4353 r O‹… Ñ  ‘ i D ’ w Ñ r M ˆ–f NhCO;35NhO w &6ihihk w  L/BDzW‚ r |{z{B : †BDC46NPMD4#BDzWy ’ 4 7 z† y r |—“ n OyBDz{NhO r | ” M r ‚‡‚ r Mjw  O*qKo?yur+r '%xw … o,+‚ƒsCr -ƒs.§wCw/Ct#1032 465700o%w8+?rjq+rjwByjro%w:9;)mvt%w 8 rjw;: ¹rjwyur<qKo?yurj…+…+xw  =>) rjqj…%2wCx  rjqu…+x „Pw ‘ N r L;L4 r Mjw  NPO r |a… Ñ w m r L;| r O r O‹… f@? 
MDxh46O i 46…4 ˆ z{O‹…}w pPqhqPq w ’!“•” xh46O4M r BDz{NhO LMDNs… n y6463šy6NhOJBD4 7 B5†g™›MD464 | r O;x n‹r xh4636w  OAnqo?yurur '%xƒw …‚oB+C0 9" 4  :EDGFF@F 9 L r xh463 pPi ‘8()2Pq ps9 v rhr M t M ? y ˆ 4O?w Ñ r M5BDz{O m rj: w &jiPil w X C r M5BoxP46O;4#M r BDz{NhO wo OHqKo?yur+r '$xw … oB+ ƒsCrJILK-ƒsM§wCw/Ct#'NÊrurj xƒw oB+ ƒsCrE…+…jo?yix3t% x3o$w6+o%q 00o%m)O)Wt% x o%wt 9 xw )x…+ x3yi… 8 t%wCWt%0qP+Q0 w f NhO r 3 mon CO?w &6ihihi w ‘ N H r M€…/3 r 35zW‚‡L|{4 r MDy€C;zBD46yB n MD4 ™8NHM BDC;4š3'B5M n yB n MD4#†™ n OyBDz{NhO ‚ r L;Lz{O;xw  O Ñ wz n B5B r O‹… ‘ w0 w m zWOx 9 4j…zBDNHMD3 9 qKo?yur+r '$xw …ÉoB+nƒsCr 9  ÂRR 00o$w;: +?rjq+rjwByjrPNÊt$wyusCrj…+¹rjqSC2 T 9 X v ’? DMDNJy464j…z{Oxh3  O/† |Wz{O4hw f NhO r 3 mon C;O w pPqhqPqhr wF“ r zBDC™ n |{O;4353 / z{Nh| r BDz{NhO3 r O‹… t za…/z† MD46yBDz{NhO r |‹NPLBDz{‚‡zh r BDz{NhO w  O Ñ w)z n B5B r O‹… ‘ w 0 w m z{O;x 9 4j…zBDNPMD3 9 qKo?yur+r '%xw …}oB+vƒsCr 9   DGF@FFU00o%w8+?rjq+rjwByurP $Trjq r krj„V0 9 X v ’?vD MDNJy64646…z{O;xP3  O|Wz{O4hw ‘ N r L/† L4 r Mjw f NhO r 3 mon CO?w pPqPqhqPt wœ” 46O;4#M r BDz{NhO r O‹…L r MD35z{Oxšz{O  L/† BDzW‚ r |{z{B : ‘ C46NPMD4BDz{y 3 : OJB rH7#( z{353 n 463z{O‡BDC;4 ™8NPMD‚ r |{zk r † BDzWNPOšNH™  ‘ † ’?“•”wO % v 4|W|{3 9pPqhqPq ( w ‘ N r LL4 r Mjw f NhCO Ñ rH7 H 4|W| r O‹…. NPO r |W… m r L;| r O w &jiPihk w! O;zŒ‹y r BDz{NhO/† t‹r 3546… L r MD354MD3 BDC r B rPn BDNh‚ r BDzWy r |{| : B rPˆ 4 r … / r OJB r xh4FNP™ y6NhOJBD4 7 B}™›MD464O;463536w Ñ 36w 9  4#MDN 7 D   X 9 “4 t M n;r M : &jihiPk w ” ? 
OJBD4M­7 4 n ‚ r O;O w &jiPihk w  OJBD4MD|{4 r / z{O;x O r B n M r |o| r O/† x n‹r xh4 L r MD35z{O;x r O;…—xh4O;4M r BDzWNPO BDCMDN n xhC n O;z™›NPMD‚ LMDNH† y6463535z{O;x/w§qu x WŽyut# " wC¹r   x rjwyur 9;ihiϵ{&jps&P(;&?l%2 w “;4MDO r O‹…/N­D•4MD46zM rŠr O‹… 2 r / zW… i r M5MD46O w &jihk%2 w}D r MD35z{O;x r 3œ…4j… n yBDz{NhO w OXnqo?yurur '%xƒw …·oB+­ƒsCr%D …+§wCw/Ct# N‚r+rj xw oB+EƒsCr<T…+…jo?yix3t% x3o%w+o$q00o%m)ρWt$ x3o%wt# 9 xƒw;: Ïxƒ…u x yi…S>00t%m % qjx(' rP NY w D•4BD4M v 46|{|{3 9 4j…/z{BDNHMjw pHqhqhq w!o%qumvtÏt%w '  m)Bxqux3y+t " …u…Crj… xw )B xƒmvt x „:3ƒsCrKo%q+rj x3y 8 „%w-Wt':Pw X v ’?D nt |{z{y r BDz{NPO;3 9 v B r O/™8NHM€… w ‘ N r L;L4 r Mjw v B n;r M5B v C;z{4 t 4Mjw &jiPkhk w  n O;z™›NPMD‚ r MDy#CzBD46yB n MD4™8NHM L r MD3'† zWOx r O‹…ŠxP46O;4M r BDz{NhO?w  OZqKo?yur+r '$xw …poB+ ƒsCr Dƒs " w;: ¹rjqjwBt% x3o%wBt00o%wL+?rjqKrjwyurvo%w[00o%m)O)Wt% x o%wt 9 xw )x…?:  x3yi…\0 9'" 4 Â^] '$>' t*)-rj…+ w zM n y4 ‘ 463 r Mjw &6ihi V w_00o%m)O)Wt% x3o%wBt )B xmpt x „C¿sCr : o%qj„HwŽD C w 2 wsBDC;4635z{3 9  O;z / 4MD35zB : NH™ X Nh|{NHM r …N/w v 4 t;r 3'BDz r OÍL r MDxh436w &jihi‘ w D r MD35z{Ox n O;…” 46O;4MDz{4M n O;x zWO n Oz™8NPMD‚‡4O  MDy#Cz{BD4 ˆ B n MD46O?w Ñ r 3'BD4M < 3„BDC;4635z{3 9 0 46z{OMDz{y€C†0 4zWO4†` Oz / 4MD35zBSaHB 2 ? 353546|W…NHM5™ w
2000
46
A Polynomial-Time Fragment of Dominance Constraints Alexander Koller Kurt Mehlhorn∗ Joachim Niehren [email protected] [email protected] [email protected] University of the Saarland / ∗Max-Planck-Institute for Computer Science Saarbr¨ucken, Germany Abstract Dominance constraints are logical descriptions of trees that are widely used in computational linguistics. Their general satisfiability problem is known to be NP-complete. Here we identify the natural fragment of normal dominance constraints and show that its satisfiability problem is in deterministic polynomial time. 1 Introduction Dominance constraints are used as partial descriptions of trees in problems throughout computational linguistics. They have been applied to incremental parsing (Marcus et al., 1983), grammar formalisms (VijayShanker, 1992; Rambow et al., 1995; Duchier and Thater, 1999; Perrier, 2000), discourse (Gardent and Webber, 1998), and scope underspecification (Muskens, 1995; Egg et al., 1998). Logical properties of dominance constraints have been studied e.g. in (Backofen et al., 1995), and computational properties have been addressed in (Rogers and Vijay-Shanker, 1994; Duchier and Gardent, 1999). Here, the two most important operations are satisfiability testing – does the constraint describe a tree? – and enumerating solutions, i.e. the described trees. Unfortunately, even the satisfiability problem has been shown to be NPcomplete (Koller et al., 1998). This has shed doubt on their practical usefulness. In this paper, we define normal dominance constraints, a natural fragment of dominance constraints whose restrictions should be unproblematic for many applications. We present a graph algorithm that decides satisfiability of normal dominance constraints in polynomial time. Then we show how to use this algorithm to enumerate solutions efficiently. An example for an application of normal dominance constraints is scope underspecification: Constraints as in Fig. 
1 can serve as underspecified descriptions of the semantic readings of sentences such as (1), considered as the structural trees of the first-order representations. The dotted lines signify dominance relations, which require the upper node to be an ancestor of the lower one in any tree that fits the description. (1) Some representative of every department in all companies saw a sample of each product. The sentence has 42 readings (Hobbs and Shieber, 1987), and it is easy to imagine how the number of readings grows exponentially (or worse) in the length of the sentence. Efficient enumeration of readings from the description is a longstanding problem in scope underspecification. Our polynomial algorithm solves this problem. Moreover, the investigation of graph problems that are closely related to normal constraints allows us to prove that many other underspecification formalisms – e.g. Minimal Recursion Semantics (Copestake et al., 1997) and Hole Semantics (Bos, 1996) – have NP-hard satisfiability problems. Our algorithm can still be used as a preprocessing step for these approaches; in fact, experience shows that it seems to solve all encodings of descriptions in Hole Semantics that actually occur. ∀u • →• comp • u • • ∀w • →• ∧• • dept • w • • ∃x • ∧• ∧• • repr • x • • ∃y • ∧• • ∧• spl • y • • ∀z • →• prod • z • • in • w • u • of • x • w • see • x • y • of • y • z • Fig. 1: A dominance constraint (from scope underspecification). 2 Dominance Constraints In this section, we define the syntax and semantics of dominance constraints. The variant of dominance constraints we employ describes constructor trees – ground terms over a signature of function symbols – rather than feature trees. f • g • a • a • Fig. 2: f(g(a, a)) So we assume a signature Σ function symbols ranged over by f, g, . . ., each of which is equipped with an arity ar(f) ≥ 0. Constants – function symbols of arity 0 – are ranged over by a, b. 
We assume that Σ contains at least one constant and one symbol of arity at least 2. Finally, let Vars be an infinite set of variables ranged over by X, Y, Z. The variables will denote nodes of a constructor tree. We will consider constructor trees as directed labeled graphs; for instance, the ground term f(g(a, a)) can be seen as the graph in Fig. 2. We define an (unlabeled) tree to be a finite directed graph (V, E). V is a finite set of nodes ranged over by u, v, w, and E ⊆V × V is a set of edges denoted by e. The indegree of each node is at most 1; each tree has exactly one root, i.e. a node with indegree 0. We call the nodes with outdegree 0 the leaves of the tree. A (finite) constructor tree τ is a pair (T, L) consisting of a tree T = (V, E), a node labeling L : V →Σ, and an edge labeling L : E → N, such that for each node u ∈V and each 1 ≤k ≤ar(L(u)), there is exactly one edge (u, v) ∈E with L((u, v)) = k.1 We draw 1The symbol L is overloaded to serve both as a node and an edge labeling. constructor trees as in Fig. 2, by annotating nodes with their labels and ordering the edges along their labels from left to right. If τ = ((V, E), L), we write Vτ = V , Eτ = E, Lτ = L. Now we are ready to define tree structures, the models of dominance constraints: Definition 2.1. The tree structure Mτ of a constructor tree τ is a first-order structure with domain Vτ which provides the dominance relation ∗τ and a labeling relation for each function symbol f ∈Σ. Let u, v, v1, . . . vn ∈Vτ be nodes of τ. The dominance relationship u∗τv holds iffthere is a path from u to v in Eτ; the labeling relationship u:f τ(v1, . . . , vn) holds iffu is labeled by the n-ary symbol f and has the children v1, . . . , vn in this order; that is, Lτ(u) = f, ar(f) = n, {(u, v1), . . . , (u, vn)} ⊆Eτ, and Lτ((u, vi)) = i for all 1 ≤i ≤n. 
A dominance constraint ϕ is a conjunction of dominance, inequality, and labeling literals of the following form, where ar(f) = n:

ϕ ::= ϕ ∧ ϕ′ | X∗Y | X≠Y | X:f(X1, . . . , Xn)

Fig. 3: An unsatisfiable constraint

Let Var(ϕ) be the set of variables of ϕ. A pair of a tree structure Mτ and a variable assignment α : Var(ϕ) → Vτ satisfies ϕ iff it satisfies each literal in the obvious way. We say that (Mτ, α) is a solution of ϕ in this case; ϕ is satisfiable if it has a solution.

We usually draw dominance constraints as constraint graphs. For instance, the constraint graph for X:f(X1, X2) ∧ X1∗Y ∧ X2∗Y is shown in Fig. 3. As for trees, we annotate node labels to nodes and order tree edges from left to right; dominance edges are drawn dotted. The example happens to be unsatisfiable because trees cannot branch upwards.

Definition 2.2. Let ϕ be a dominance constraint that does not contain two labeling constraints for the same variable.² Then the constraint graph for ϕ is a directed labeled graph G(ϕ) = (Var(ϕ), E, L). It contains a (partial) node labeling L : Var(ϕ) ⇝ Σ and an edge labeling L : E → N ∪ {∗}. The sets of edges E and labels L of the graph G(ϕ) are defined in dependence of the literals in ϕ: The labeling literal X:f(X1, . . . , Xn) belongs to ϕ iff L(X) = f and for each 1 ≤ i ≤ n, (X, Xi) ∈ E and L((X, Xi)) = i. The dominance literal X∗Y is in ϕ iff (X, Y) ∈ E and L((X, Y)) = ∗.

² Every constraint can be brought into this form by introducing auxiliary variables and expressing X=Y as X∗Y ∧ Y∗X.

Note that inequalities in constraints are not represented by the corresponding constraint graph. We define (solid) fragments of a constraint graph to be maximal sets of nodes that are connected over tree edges.

3 Normal Dominance Constraints

Satisfiability of dominance constraints can be decided easily in non-deterministic polynomial time; in fact, it is NP-complete (Koller et al., 1998).

Fig. 4: Overlap

The NP-hardness proof relies on the fact that solid fragments can "overlap" properly. For illustration, consider the constraint X:f(X1, X2) ∧ Y:f(Y1, Y2) ∧ Y∗X ∧ X∗Y1, whose constraint graph is shown in Fig. 4. In a solution of this constraint, either Y or Y1 must be mapped to the same node as X; if X = Y, the two fragments overlap properly. In the applications in computational linguistics, we typically don't want proper overlap; X should
For illustration, consider the constraint X:f(X1, X2) ∧ Y:f(Y1, Y2) ∧ Y ∗ X ∧ X ∗ Y1, whose constraint graph is shown in Fig. 4. In a solution of this constraint, either Y or Y1 must be mapped to the same node as X; if X = Y, the two fragments overlap properly. In the applications in computational linguistics, we typically don’t want proper overlap; X should never be identified with Y, only with Y1. (Every constraint can be brought into the form required by Definition 2.2 by introducing auxiliary variables and expressing X = Y as X ∗ Y ∧ Y ∗ X.) The subclass of dominance constraints that excludes proper overlap (and fixes some minor inconveniences) is the class of normal dominance constraints.

Definition 3.1. A dominance constraint ϕ is called normal iff for all variables X, Y, Z ∈ Var(ϕ),

1. X ≠ Y in ϕ iff both X:f(. . .) and Y:g(. . .) in ϕ, where f and g may be equal (no overlap);
2. X only appears once as a parent and once as a child in a labeling literal (tree-shaped fragments);
3. if X ∗ Y in ϕ, neither X:f(. . .) nor Z:f(. . . Y . . .) is in ϕ (dominances go from holes to roots);
4. if X ∗ Y in ϕ, then there are Z, f such that Z:f(. . . X . . .) in ϕ (no empty fragments).

Fragments of normal constraints are tree-shaped, so they have a unique root and leaves. We call unlabeled leaves holes. If X is a variable, we can define Rϕ(X) to be the root of the fragment containing X. Note that by condition 1 of the definition, the constraint graph specifies all the inequality literals in a normal constraint. All constraint graphs in the rest of the paper will represent normal constraints. The main result of this paper, which we prove in Section 4, is that the restriction to normal constraints indeed makes satisfiability polynomial:

Theorem 3.2. Satisfiability of normal dominance constraints is O((k+1)³n² log n), where n is the number of variables in the constraint, and k is the maximum number of dominance edges into the same node in the constraint graph.
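Conditions 2–4 of Definition 3.1 only mention the literals themselves, so they are easy to check mechanically. A sketch of such a check (our own code; condition 1 concerns the inequality literals, which the constraint graph does not record, so it is not checked here):

```python
def is_normal_graph(labelings, dominances):
    """Check conditions 2-4 of Definition 3.1 on the literals of a constraint.
    labelings: list of (X, f, [X1, ..., Xn]); dominances: list of (X, Y)."""
    parents, children = {}, {}
    for (x, f, args) in labelings:
        if x in children:          # X appears twice as a parent: violates cond. 2
            return False
        children[x] = list(args)
        for xi in args:
            if xi in parents:      # X appears twice as a child: violates cond. 2
                return False
            parents[xi] = x
    labelled = set(children)
    for (x, y) in dominances:
        if x in labelled or y in parents:
            return False           # cond. 3: dominances go from holes to roots
        if x not in parents:
            return False           # cond. 4: no empty fragments
    return True
```

For instance, the constraint of Fig. 3 passes the check, while a dominance edge leaving a labeled node would fail condition 3.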
In the applications, k will be small – in scope underspecification, for instance, it is bounded by the maximum number of arguments a verb can take in the language if we disregard VP modification. (Allowing more inequality literals does not make satisfiability harder, but the pathological case X ≠ X invalidates the simple graph-theoretical characterizations below.) So we can say that satisfiability of the linguistically relevant dominance constraints is O(n² log n).

4 A Polynomial Satisfiability Test

Now we derive the satisfiability algorithm that proves Theorem 3.2 and prove it correct. In Section 5, we embed it into an enumeration algorithm. An alternative proof of Theorem 3.2 is by reduction to a graph problem discussed in (Althaus et al., 2000); this more indirect approach is sketched in Section 6. Throughout this section and the next, we will employ the following non-deterministic choice rule (Distr), where X, Y are different variables:

(Distr)  ϕ ∧ X ∗ Z ∧ Y ∗ Z  →  ϕ ∧ X ∗ Rϕ(Y) ∧ Y ∗ Z  ∨  ϕ ∧ Y ∗ Rϕ(X) ∧ X ∗ Z

In each application, we can pick one of the disjuncts on the right-hand side. For instance, we get Fig. 5b by choosing the second disjunct in a rule application to Fig. 5a. The rule is sound if the left-hand side is normal: X ∗ Z ∧ Y ∗ Z entails X ∗ Y ∨ Y ∗ X, which entails the right-hand side disjunction because of conditions 1, 2, 4 of normality and X ≠ Y. Furthermore, it preserves normality: if the left-hand side is normal, so are both possible results.

Definition 4.1. A normal dominance constraint ϕ is in solved form iff (Distr) is not applicable to ϕ and G(ϕ) is cycle-free. Constraints in solved form are satisfiable.

4.1 Characterizing Satisfiability

In a first step, we characterize the unsatisfiability of a normal constraint by the existence of certain cycles in the undirected version of its graph (Proposition 4.4). Recall that a cycle in a graph is simple if it does not contain the same node twice.

Definition 4.2.
A cycle in an undirected constraint graph is called hypernormal if it does not contain two adjacent dominance edges that emanate from the same node.

Fig. 5: (a) A constraint that entails X ∗ Y, and (b) the result of trying to arrange Y above X. The cycle in (b) is hypernormal, the one in (a) is not.

For instance, the cycle in the left-hand graph in Fig. 5 is not hypernormal, whereas the cycle in the right-hand one is.

Lemma 4.3. A normal dominance constraint whose undirected graph has a simple hypernormal cycle is unsatisfiable.

Proof. Let ϕ be a normal dominance constraint whose undirected graph contains a simple hypernormal cycle. Assume first that it contains a simple hypernormal cycle C that is also a cycle in the directed graph. There is at least one leaf of a fragment on C; let Y be such a leaf. Because ϕ is normal, Y has a mother X via a tree edge, and X is on C as well. That is, X must dominate Y but is properly dominated by Y in any solution of ϕ, so ϕ is unsatisfiable. In particular, if an undirected constraint graph has a simple hypernormal cycle C with only one dominance edge, C is also a directed cycle, so the constraint is unsatisfiable.

Now we can continue inductively. Let ϕ be a constraint with an undirected simple hypernormal cycle C of length l, and suppose we know that all constraints with cycles of length less than l are unsatisfiable. If C is a directed cycle, we are done (see above); otherwise, the edges in C must change directions somewhere. Because ϕ is normal, this means that there must be a node Z that has two incoming dominance edges (X, Z), (Y, Z) which are adjacent edges in C. If X and Y are in the same fragment, ϕ is trivially unsatisfiable. Otherwise, let ϕ1 and ϕ2 be the two constraints obtained from ϕ by one application of (Distr) to X, Y, Z. Let C1 be the sequence of edges we obtain from C by replacing the path from X to Rϕ(Y) via Z by the edge (X, Rϕ(Y)).
C is hypernormal and simple, so no two dominance edges in C emanate from the same node; hence, the new edge is the only dominance edge in C1 emanating from X, and C1 is a hypernormal cycle in the undirected graph of ϕ1. C1 is still simple, as we have only removed nodes. But the length of C1 is strictly less than l, so ϕ1 is unsatisfiable by the induction hypothesis. An analogous argument shows unsatisfiability of ϕ2. But because (Distr) is sound, this means that ϕ is unsatisfiable too.

Proposition 4.4. A normal dominance constraint is satisfiable iff its undirected constraint graph has no simple hypernormal cycle.

Proof. The direction that a normal constraint with a simple hypernormal cycle is unsatisfiable is shown in Lemma 4.3. For the converse, we first define an ordering ϕ1 ≤ ϕ2 on normal dominance constraints: it holds if both constraints have the same variables, labeling and inequality literals, and if the reachability relation of G(ϕ1) is a subset of that of G(ϕ2). If the subset inclusion is proper, we write ϕ1 < ϕ2. We call a constraint ϕ irredundant if there is no normal constraint ϕ′ with fewer dominance literals but ϕ ≤ ϕ′. If ϕ is irredundant and G(ϕ) is acyclic, both results of applying (Distr) to ϕ are strictly greater than ϕ.

Now let ϕ be a constraint whose undirected graph has no simple hypernormal cycle. We can assume without loss of generality that ϕ is irredundant; otherwise we make it irredundant by removing dominance edges, which does not introduce new hypernormal cycles. If (Distr) is not applicable to ϕ, ϕ is in solved form and hence satisfiable. Otherwise, we know that both results of applying the rule are strictly greater than ϕ. It can be shown that one of the results of an application of the distribution rule contains no simple hypernormal cycle. We omit this argument for lack of space; details can be found in the proof of Theorem 3 in (Althaus et al., 2000).
Furthermore, the maximal length of a <-increasing chain of constraints is bounded by n², where n is the number of variables. Thus, applications of (Distr) can only be iterated a finite number of times on constraints without simple hypernormal cycles (given redundancy elimination), and it follows by induction that ϕ is satisfiable.

4.2 Testing for Simple Hypernormal Cycles

We can test an undirected constraint graph for the presence of simple hypernormal cycles by solving a perfect weighted matching problem on an auxiliary graph A(G(ϕ)). Perfect weighted matching in an undirected graph G = (V, E) with edge weights is the problem of selecting a subset E′ of edges such that each node is adjacent to exactly one edge in E′, and the sum of the weights of the edges in E′ is maximal.

The auxiliary graph A(G(ϕ)) we consider is an undirected graph with two types of edges. For every edge e = (v, w) ∈ G(ϕ) we have two nodes ev, ew in A(G(ϕ)). The edges are as follows:

(Type A) For every edge e in G(ϕ) we have the edge {ev, ew}.
(Type B) For every node v and distinct edges e, f which are both incident to v in G(ϕ), we have the edge {ev, fv} if either v is not a leaf, or if v is a leaf and either e or f is a tree edge.

We give Type A edges weight zero and Type B edges weight one. Now it can be shown (Althaus et al., 2000, Lemma 2) that A(G(ϕ)) has a perfect matching of positive weight iff the undirected version of G(ϕ) contains a simple hypernormal cycle. The proof is by constructing positive matchings from cycles, and vice versa.

Perfect weighted matching on a graph with n nodes and m edges can be done in time O(nm log n) (Galil et al., 1986). The matching algorithm itself is beyond the scope of this paper; for an implementation (in C++) see e.g. (Mehlhorn and Näher, 1999). Now let’s say that k is the maximum number of dominance edges into the same node in G(ϕ); then A(G(ϕ)) has O((k + 1)n) nodes and O((k + 1)²n) edges. This shows:

Proposition 4.5.
A constraint graph can be tested for simple hypernormal cycles in time O((k + 1)³n² log n), where n is the number of variables and k is the maximum number of dominance edges into the same node.

This completes the proof of Theorem 3.2: we can test satisfiability of a normal constraint by first constructing the auxiliary graph and then solving its weighted matching problem, in the time claimed.

4.3 Hypernormal Constraints

It is even easier to test the satisfiability of a hypernormal dominance constraint – a normal dominance constraint in whose constraint graph no node has two outgoing dominance edges. A simple corollary of Prop. 4.4 for this special case is:

Corollary 4.6. A hypernormal constraint is satisfiable iff its undirected constraint graph is acyclic.

This means that satisfiability of hypernormal constraints can be tested in linear time by a simple depth-first search.

5 Enumerating Solutions

Now we embed the satisfiability algorithms from the previous section into an algorithm for enumerating the irredundant solved forms of constraints. A solved form of the normal constraint ϕ is a normal constraint ϕ′ which is in solved form and ϕ ≤ ϕ′, with respect to the ≤-order from the proof of Prop. 4.4. (In the literature, solved forms with respect to the NP saturation algorithms can contain additional labeling literals; our notion of an irredundant solved form corresponds to a minimal solved form there.) Irredundant solved forms of a constraint are very similar to its solutions: their constraint graphs are tree-shaped, but may still contain dominance edges. Every solution of a constraint is a solution of one of its irredundant solved forms.

Fig. 6: Algorithm for enumerating all irredundant solved forms of a normal constraint:
1. Check satisfiability of ϕ. If it is unsatisfiable, terminate with failure.
2. Make ϕ irredundant.
3. If ϕ is in solved form, terminate with success.
4. Otherwise, apply the distribution rule and repeat the algorithm for both results.
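The control flow of the Fig. 6 procedure can be rendered as a short recursive generator. In the sketch below (our own names, not the paper's), the satisfiability test, redundancy elimination, solved-form check, and distribution rule are passed in as functions, since this is only meant to show the recursion scheme, not the machinery behind each step:

```python
def solved_forms(phi, satisfiable, make_irredundant, is_solved, distribute):
    """Recursive generator following steps 1-4 of Fig. 6."""
    if not satisfiable(phi):           # step 1: cut off unsatisfiable branches
        return
    phi = make_irredundant(phi)        # step 2: redundancy elimination
    if is_solved(phi):                 # step 3: an irredundant solved form
        yield phi
        return
    for branch in distribute(phi):     # step 4: both disjuncts of (Distr)
        yield from solved_forms(branch, satisfiable, make_irredundant,
                                is_solved, distribute)
```

With the real components plugged in, each leaf of the recursion tree yields one irredundant solved form, and the satisfiability test at step 1 prunes failed branches early.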
However, the number of irredundant solved forms is always finite, whereas the number of solutions typically is not: X:a ∧ Y:b is in solved form, but each solution must contain an additional node with an arbitrary label that combines X and Y into a tree (e.g. f(a, b), g(a, b)). That is, we can extract a solution from a solved form by “adding material” if necessary.

The main workhorse of the enumeration algorithm, shown in Fig. 6, is the distribution rule (Distr) we have introduced in Section 4. As we have already argued, (Distr) can be applied at most n² times. Each end result is in solved form and irredundant. On the other hand, distribution is an equivalence transformation, which preserves the total set of solved forms of the constraints after the same iteration. Finally, the redundancy elimination in Step 2 can be done in time O((k+1)n²) (Aho et al., 1972). This proves:

Theorem 5.1. The algorithm in Fig. 6 enumerates exactly the irredundant solved forms of a normal dominance constraint ϕ in time O((k + 1)⁴n⁴N log n), where N is the number of irredundant solved forms, n is the number of variables, and k is the maximum number of dominance edges into the same node.

Of course, the number of irredundant solved forms can still be exponential in the size of the constraint. Note that for hypernormal constraints, we can replace the quadratic satisfiability test by the linear one, and we can skip Step 2 of the enumeration algorithm because hypernormal constraints are always irredundant. This improves the runtime of enumeration to O((k + 1)n³N).

6 Reductions

Instead of proving Proposition 4.4 directly as we have done above, we can also reduce it to a configuration problem of dominance graphs (Althaus et al., 2000), which provides a more general perspective on related problems as well. Dominance graphs are unlabeled, directed graphs G = (V, E ⊎ D) with tree edges E and dominance edges D.
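Before turning to the reductions, the linear test for hypernormal constraints mentioned above deserves a concrete sketch. By Corollary 4.6, a hypernormal constraint is satisfiable iff the undirected version of its constraint graph is acyclic; the code below (our own, not the paper's) detects an undirected cycle with union-find rather than an explicit depth-first search, which gives the same near-linear behavior:

```python
def satisfiable_hypernormal(nodes, edges):
    """Corollary 4.6: satisfiable iff the undirected constraint graph,
    with tree and dominance edges alike, contains no cycle."""
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    for (v, w) in edges:
        rv, rw = find(v), find(w)
        if rv == rw:                        # this edge closes an undirected cycle
            return False
        parent[rv] = rw
    return True
```

For instance, the constraint of Fig. 3, X:f(X1, X2) ∧ X1 ∗ Y ∧ X2 ∗ Y, is hypernormal (no node has two outgoing dominance edges) and its undirected graph is cyclic, so the test reports it unsatisfiable; dropping one of the dominance edges makes it satisfiable.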
Nodes with no incoming tree edges are called roots, and nodes with no outgoing ones are called leaves; dominance edges only go from leaves to roots. A configuration of G is a graph G′ = (V, E ⊎ E′) such that every edge in D is realized by a path in G′. The following results are proved in (Althaus et al., 2000):

1. Configurability of dominance graphs is in O((k + 1)³n² log n), where k is the maximum number of dominance edges into the same node.
2. If we specify a subset V′ ⊆ V of closed leaves (we call the others open) and require that only open leaves can have outgoing edges in E′, the configurability problem becomes NP-complete. (This is shown by encoding a strongly NP-complete partitioning problem.)
3. If we require in addition that every open leaf has an outgoing edge in E′, the problem stays NP-complete.

Satisfiability of normal dominance constraints can be reduced to the first problem in the list by deleting all labels from the constraint graph. The reduction can be shown to be correct by encoding models as configurations and vice versa. On the other hand, the third problem can be reduced to the problems of whether there is a plugging for a description in Hole Semantics (Bos, 1996), or whether a given MRS description can be resolved (Copestake et al., 1997), or whether a given normal dominance constraint has a constructive solution. This reduction is by deleting all labels and making leaves that had nullary labels closed. This means that (the equivalent of) deciding satisfiability in these approaches is NP-hard.

The crucial difference between e.g. satisfiability and constructive satisfiability of normal dominance constraints is that it is possible that a solved form has no constructive solutions. This happens e.g. in the example from Section 5, X:a ∧ Y:b. The constraint, which is in solved form, is satisfiable e.g. by the tree f(a, b); but every solution must contain an additional node with a binary label, and hence cannot be constructive.
For practical purposes, however, it can still make sense to enumerate the irredundant solved forms of a normal constraint even if we are interested only in constructive solutions (a constructive solution is one where every node in the model is the image of a variable for which a labeling literal is in the constraint; informally, this means that the solution only contains “material” “mentioned” in the constraint): it is certainly cheaper to try to find constructive solutions of solved forms than of arbitrary constraints. In fact, experience indicates that for those constraints we really need in scope underspecification, all solved forms do have constructive solutions – although it is not yet known why. This means that our enumeration algorithm can in practice be used without change to enumerate constructive solutions, and it is straightforward to adapt it e.g. to an enumeration algorithm for Hole Semantics.

7 Conclusion

We have investigated normal dominance constraints, a natural subclass of general dominance constraints. We have given an O(n² log n) satisfiability algorithm for them and integrated it into an algorithm that enumerates all irredundant solved forms in time O(Nn⁴ log n), where N is the number of irredundant solved forms. This eliminates any doubts about the computational practicability of dominance constraints which were raised by the NP-completeness result for the general language (Koller et al., 1998) and expressed e.g. in (Willis and Manandhar, 1999). First experiments confirm the efficiency of the new algorithm – it is superior to the NP algorithms especially on larger constraints.

On the other hand, we have argued that the problem of finding constructive solutions even of a normal dominance constraint is NP-complete. This result carries over to other underspecification formalisms, such as Hole Semantics and MRS. In practice, however, it seems that the enumeration algorithm presented here can be adapted to those problems.

Acknowledgments.
We would like to thank Ernst Althaus, Denys Duchier, Gert Smolka, Sven Thiel, all members of the SFB 378 project CHORUS at the University of the Saarland, and our reviewers. This work was supported by the DFG in the SFB 378.

References

A. V. Aho, M. R. Garey, and J. D. Ullman. 1972. The transitive reduction of a directed graph. SIAM Journal of Computing, 1:131–137.

E. Althaus, D. Duchier, A. Koller, K. Mehlhorn, J. Niehren, and S. Thiel. 2000. An efficient algorithm for the configuration problem of dominance graphs. Submitted. http://www.ps.uni-sb.de/Papers/abstracts/dom-graph.html.

R. Backofen, J. Rogers, and K. Vijay-Shanker. 1995. A first-order axiomatization of the theory of finite trees. Journal of Logic, Language, and Information, 4:5–39.

Johan Bos. 1996. Predicate logic unplugged. In Proceedings of the 10th Amsterdam Colloquium.

A. Copestake, D. Flickinger, and I. Sag. 1997. Minimal Recursion Semantics. An Introduction. Manuscript, ftp://csli-ftp.stanford.edu/linguistics/sag/mrs.ps.gz.

Denys Duchier and Claire Gardent. 1999. A constraint-based treatment of descriptions. In Proceedings of IWCS-3, Tilburg.

D. Duchier and S. Thater. 1999. Parsing with tree descriptions: a constraint-based approach. In Proc. NLULP’99, Las Cruces, New Mexico.

M. Egg, J. Niehren, P. Ruhrberg, and F. Xu. 1998. Constraints over Lambda-Structures in Semantic Underspecification. In Proceedings COLING/ACL’98, Montreal.

Z. Galil, S. Micali, and H. N. Gabow. 1986. An O(EV log V) algorithm for finding a maximal weighted matching in general graphs. SIAM Journal of Computing, 15:120–130.

Claire Gardent and Bonnie Webber. 1998. Describing discourse semantics. In Proceedings of the 4th TAG+ Workshop, Philadelphia.

Jerry R. Hobbs and Stuart M. Shieber. 1987. An algorithm for generating quantifier scopings. Computational Linguistics, 13:47–63.

A. Koller, J. Niehren, and R. Treinen. 1998. Dominance constraints: Algorithms and complexity. In Proceedings of the 3rd LACL, Grenoble.
To appear as LNCS.

M. P. Marcus, D. Hindle, and M. M. Fleck. 1983. D-theory: Talking about talking about trees. In Proceedings of the 21st ACL.

K. Mehlhorn and S. Näher. 1999. The LEDA Platform of Combinatorial and Geometric Computing. Cambridge University Press, Cambridge. See also http://www.mpi-sb.mpg.de/LEDA/.

R. A. Muskens. 1995. Order-independence and underspecification. In J. Groenendijk, editor, Ellipsis, Underspecification, Events and More in Dynamic Semantics. DYANA Deliverable R.2.2.C.

Guy Perrier. 2000. From intuitionistic proof nets to interaction grammars. In Proceedings of the 5th TAG+ Workshop, Paris.

O. Rambow, K. Vijay-Shanker, and D. Weir. 1995. D-Tree grammars. In Proceedings of the 33rd ACL, pages 151–158.

J. Rogers and K. Vijay-Shanker. 1994. Obtaining trees from their descriptions: An application to tree-adjoining grammars. Computational Intelligence, 10:401–421.

K. Vijay-Shanker. 1992. Using descriptions of trees in a tree adjoining grammar. Computational Linguistics, 18:481–518.

A. Willis and S. Manandhar. 1999. Two accounts of scope availability and semantic underspecification. In Proceedings of the 37th ACL.
Ü'¿6¾ÀºvØ©»½'Å)Á9ŏÆÁÄÑÀÅeáE¾Þ»oßÐÅÉÔ¿ÜvÆŔÚE¾Þ»AÇAÔ¿AÆŏÁ9ΔÆ»BÅÉË »½'Ãl» »½vÅåÄÔÖ6͍ÕzÎl켿6ÐÎΔ»½Ä¾ÀºvØöÐŏ»½vÎEË1øúÓÃl»:”Ú  ”ý5Á9ŏÆҝΔÆ6б¿)å9ŏ»B»BŏÆÛ΋º„Ã¥¿бÔÑÀÑç»BÆ6Ôº'¾Àº'Ø.¿Bŏ» »½'ÔºKΔ»½vŏÆãÐŏ»½vÎEË'¿â¹ºä»½'ÅåÄÔÖÍÕzÎl쓿бÎΔ»½EÕ ¾Àº'ØvÚ&»½'Å7¿6ÐÎΔ»½'ÅÉË3ÁÄÆΔåÄÃlåľÀÑÙ¾Þ»gߥΔÒ»ÃlØ~ø<M9z¤ýgÕ Ø”Æ6ÔÐÝù!ÆHª¬«^­ øW ø |W ø.-?> î ø.C ýM¾À¿!֏ÔÑÙ֏Ü'ÑÀÃl»BÅÉËHÔ¿!Ò|΋ÑèÕ ÑÞÎVÇ¿Ï ùÆ ª¬«^­ øWOøw| Wø.-?> î ø.C ýK8 ®°¯6± ù!Ƨ w øWOøK|Wø.-?> î ø.C ý ¾Àқ{zÿ’d ² øW ø.-?> î ø.C ý_ù!ƪ¬«^­ øW ø |W ø/-?>|{ C:î ø.C ýB¾ÞÒ·{8³d øe‹ý ǽvŏÆÅ{8‚,Æ9øWOø/-?> î ø ýçF{6´D8öøg{ 9‚¤ý ñ ± { C ñ ± ¯6± 8 ±Hµ ±·¶ r ± { C8t8¸ï¹»º ’ ï ’  ¶ r ± { C8t)¸ï ¹8º ’ ï ’ ¹º×»½'ÅkÅ_Ü'Ãl»¾Þ΋ºÔÃlå9ΤϔŔÚKñ ± Ë'ÅɺvΔ»BÅÉ¿7»½vÅ¥ºÜ'ÐÕ å9ŏÆñÎ”Ò ø<M9z¤ýgÕzؔÆ6ÔРǽv΋¿ÅÒ|Æ6ÅÜvÅɺÄÖeß ¾À¿ {_Ú Ã”º'Ë »½vŶÖeÎżak֏¾ÀÅɺ» ¯6± ¾À¿k֏ÔÑÀÑÞÅÉË »½vÅ3Ë'¾Ù¿Öe΋Ü'º» Æ6Ãl»¾ÀÎvÚ Ç½'¾ÙÖ6½¼ÆÅ<AÄÅÉÖe»¿ë»½vÅÿ5Î_Î_ËEÕgéQÜvÆ6¾ÙºvØ ÅÉ¿Õ »¾ÀÐHÃl»Bōøzÿ5ÎÎEËÚ  ‹ý)½‹â îw_ºâ  ¿ÃÉßE¿§»½'Ãl» ùÆ ª¾«^­ øWOøB|˜Wø.-?> î ø.C ýK¾À¿KÜ'ºÄËvŏÆBÕzÅÉ¿B»¾ÀÐHÃl»BÅÉËZå_ß ¯ ± »½'Ôºï¾À»¿CÐHÃnáE¾ÀÐCÜ'ÐõÑÀ¾À͔ÅÉÑÀ¾À½vÎ_Î_Ë×ÅÉ¿B»¾ÀбÃl»BŔÚA¾ÞÒ { ÿ³dEÚÄΔÆ5¾À¿"åÄÔÖ6͔ÅÉËKÎlì~å_߯¾Þ»¿¿6ÐÎΔ»½Ä¾ÀºvØ»BŏÆ6Ð ùÆ ª¾«^­ øWOøV|WOø.-?>|{ C:î ø.C ý±¾ÀºïÁ'Æ6ΔÁ&ΔÆ6»¾Þ΋ºÔ»BÎ×»½vÅ ÏlÔÑÀÜvÅãΔÒ1»½'Å5ÒúÜ'º'Öe»¾À΋º ² øWOø.-?> î ø.C ý!ΔÒQ¾Þ»¿ Öe΋º'Ë'¾ÞÕ »¾Þ΋ºÄÔÑ»BŏÆ6пWOø.-?> î ø.C Ú'¾ÞÒª{8³dEâ ð)ΤÇAŏϔŏÆÉÚ&å9ÅÉ֏ÔÜ'¿BÅîw_ºâ 7Æ6ÅÜ'¾ÀÆÅÉ¿ Öe΋ÐÁÄÑÀ¾ÞÕ ÖÃl»BÅÉËÖe΋ÐÁÜv»Ãl»¾Þ΋º7¾Àº ² øW ø.-?> î ø.C ý;ÚnÇAÅ ¿¾ÀÐÁÑÀ¾ÞÒß ¾Þ»X»BÎؔŏ»"Ã7ÒúÜ'º'Öe»¾Þ΋ºkΔÒQ»½vÅãÒ|ÆÅ_ÜvÅɺ'ÖeßkΔÒçÃÖe΋ºEÕ Ë'¾À»¾Þ΋º'ÔÑ&»BŏÆ6ЯÚÄÔ¿Ò|΋ÑÙÑÞΤÇ¿Ï ² ø,Æ:øWOø/-?> î ø/C ýK8ÁÀ:ýŒ8  i î y ,·9øWOø.-?> î ø.C ýw8ÁÀ?z ÷ÄÅ fÇÆ î y ,Æ9øWOø/-?> î ø/C ýK8ÈÀ?z øe<ý ǽvŏÆ6Å Â 8Š ¶ à T %(65  % î ±JÉ Æ ù!Æ ª¬«^­ øWOøv| WOø.-?> î ø.C ý à T %(65  % î ±JÉ Æ ùÆ § w øW ø | W ø/-?> î ø/C ý ç î y ,Æ&øW ø.-?> î ø.C ýK8ÁÀ?z 8 Ê T %(5 º ’ % î ± fÇÆ î ËÍÌ r T %(65  %( ’ t f Å ù!Æ ª¬«^­ øWOø| WOø/-?>|{ C:î ø.C ý ¹º3î÷º:âª<vÚ9»½vÅ©Æ;ÔºvؔÅ-ΔÒÀ×¾À¿"åÜ'Ö͔ŏ»BÅÉ˄¾Àº»BÎõ Æŏ؋¾À΋º'¿-¿6Ü'Ö6½ÔÔ¿ŽÀƒ8Îd4ççH!=ç=ç: ç ~ÔºÄˇÀÈ]Ï0 Ð8Ñ –  „Kš˜mҏ¡ }IJ…¼; ^Ó ¹  ¥CÔ9ÕÖ 7 § ¿¾ÀºÄÖež޻ 
¾À¿"ÔÑÀ¿BαËľak֏Ü'ÑÞ»A»BÎHÖe΋ÐÁÜv»BÅÛ»½'¾À¿XÅ_Ü'ÃnÕ »¾Þ΋º¯ÒÎ”ÆÔÑÀÑ9Á9΋¿¿6¾ÞåÄÑÞÅ5ÏnÔÑÙÜvÅÉ¿ ΔÒ=ÀQâ ¹º »½vÅ Ò|ΔÆ6ÐHÔÑÀ¾À¿РΔү»½vÅ ¿¾ÀÐÁÄÑÙ¾èêÄÅÉËÊåÄÔÖÍÕ Îlì]¿бÎΔ»½'¾ÙºvØvÚkÅÉÔÖ;½ Á'ÆΔåÄÃlåľÙÑÀ¾Þ»gßëǽ'΋¿BÅ ÌùÅÉ¿B»¾ÀÐHÃl»BÅʾÀ¿ƒÅÆÎȾÀ¿ åÄÔÖ͔ÅÉË÷Îlì<å_ß¾Þ»¿ ÖeΔÆBÕ ÆÅÉ¿BÁ9΋º'ËľÀºvØõ¿ÐÎ_Δ»½'¾ÀºvØõ»BŏÆ;Я⠹ º Åeá_Á9ŏÆ6¾èÕ ÐÅɺ»¿Ú'»½vÅ-¿ÐÎ_Δ»½'¾ÀºvØ»BŏÆ6ÐΔÒMùÆ ª¾«×­ øWø2y ç[%ø.zÆ| W ø.-?> î ø.C y ç[ ø.-?>|{ C:î ø.C zç2Y ø.sOî ø.C ýN¾À¿NËvŏ»BŏÆ;б¾ÀºvÅÉË Ã”¿"ҝ΋ÑÀÑÞÎVÇ¿Ï ùÆHª¾«^­ øW ø y ç[ ø zÆ| WOø.-?>|{ C:î ø.C y ç[ ø/-?>|{ x î ø.C zç Y ø.s { C:î ø.C ý ¾ÞÒ <Ø]pç ˆ ÿp ùÆ ª¾«^­ øWOø y ç[%ø/zÆ| WOø.-?> î ø.C y ç[ ø/-?>|{ C:î ø.C z ý ¾ÞÒ <Ø]pç ˆ 8Š ùÆ ª¾«^­ øWOø y ç[%ø/zÆ| WOø.-?>|{ C:î ø.C y ç[ ø/-?>|{ x î ø.C z ý ¾ÞÒ <öÿpç ˆ 8³d ùÆ ÙFÚãøWOøgý ¾ÞÒ <ä8Šç ˆ 8³d é"½vÅ ¿ÐÎ_Δ»½'¾Àº'Ø »BŏÆ6Ð Î”Ò ù!Æ ª¬«^­ ø.Y ø | W ø.w î ø y ç[ ø.w { C:î ø zç2Y ø.x<î ø.C ý ¾À¿ Ëvŏ»BŏÆ;б¾ÀºvÅÉË Ã”¿"ҝ΋ÑÀÑÞÎVÇ¿Ï ùÆHª¾«^­ ø.Y ø | WOø.w { C:î ø y ç[ ø/w { x î ø zç Y ø.x { C:î ø.C ý¾ÞÒ"Š’]pç ŒLÿp ùÆ ª¾«^­ ø.Y øK| WOø.w î ø y ç[ ø/w { C:î ø.z ý ¾ÞÒ"Š’]pç Œ£8Š ùÆ ª¾«^­ ø.Y øK| WOø.w { C:î ø y ç[ ø/w { x î ø z ý ¾ÞÒ"Š’]pç Œ£8³d ùÆ ÙFÚãø.Y øzý ¾ÞÒ"Šð8³d4ç Œ£8³d ¹ºÔ»½vůÅÜÄÃl»¾Þ΋º'¿Ãlå9ΤϔŔÚA»½vÅäÜ'º'¾ÀؔÆ6Ã”ÐæÁ'ÆΔåÄÃnÕ åľÀÑÙ¾Þ»¾ÞÅÉ¿±ÃlÆń֏ÔÑÀ֏ÜÄÑÀÃl»BÅÉË åß Ü'¿¾ÀºvØ×»½vńÔË'Ë'¾Þ»¾ÞϔŠ¿ÐÎ_Δ»½'¾Àº'Ø$Ç)¾Þ»½ÁÛ 8oed x”Ú5ǽ'¾ÙÖ6½§¾À¿¯Ö;½v΋¿BÅɺ »½vÆ΋Ü'؋½ñÅeáEÁ&ŏÆ;¾ÀÐÅɺ»¿â§é"½vÅKÅ_Ü'Ãl»¾Þ΋º Ò|ΔÆ.»½vŠÔË'Ë'¾À»¾ÞϔÅ)¿бÎΔ»½'¾ÙºvØÄøX½vÅɺÚ0‹ýA¾À¿AÔ¿XÒ|΋ÑÀÑÀΤÇ¿Ï ùÆ ÙFÚ øWOøw|WOø.-?> î ø.C ýK8 ,Æ&øWOø.-?> î øzý 9Û à T % ø,Æ9øWOø/-?> î ø ý 9ÜÛlý œ ÓN ÝTÜߨÖòß؛ٷâ¬Qwâ=ت٪â4Ø" â é"½vÅãÁÄÃlÆ;ÔÐŏ»BŏÆ6¿"ΔÒMÔºäð5Ì3ÌæÐ±Ã¤ß¥½ÄÃÉϔũË'¾Þì&ŏÆBÕ Åɺ»!ËvŏؔÆŏŠΔÒ¿»Ãl»¾À¿B»¾À֏ÔÑEÆÅÉÑÙ¾ÀÃlåľÀÑÀ¾À»gßÛå9ÅÉ֏ÔÜ'¿BÅXÁÄÃnÕ Æ6ÔÐŏ»BŏÆ-ÆÅÉÑÀ¾ÀÃlå¾ÀÑÀ¾Þ»oßKËvŏÁ9Åɺ'Ë'¿ã΋º×»½vűҝÆÅÜ'Åɺ'Öeß Î”ÒÄÖe΋º'Ëľ޻¾Þ΋º'ÔÑ»BŏÆ6ЯâÆ,vΔÆNÅeávÔÐÁÄÑÞŔڔÑÞŏ»MÃãÖeΔÆÁÄÜ'¿ Öe΋º'¿¾Ù¿B»)ΔÒ-б¾ÙÑÀÑÀ¾Þ΋º¯ÇAΔÆ6Ë'¿ãÔº'˳ÑÞŏ»5»½vÅ7Ò|΋ÑÙÑÞÎ¤Ç Õ ¾ÀºvØkÁÄÃlÆ6ÔÐŏ»BŏÆ;¿)å9Å-Åeá_»BÆ6ÔÖe»BÅÉ˶ҝÆ΋Ð]»½vÅ7ÖeΔÆÁÄÜ'¿ å_ßkÜ'¿¾ÙºvØ©»½vÅÛбÃnáv¾ÀÐCÜ'ÐÈÑÀ¾Þ͔ÅÉÑÀ¾Ù½vÎÎEËÅÉ¿B»¾ÙбÃl»¾Þ΋ºâ ùÆVøÞvýK8³d4èßd  ùÆVø ¯ | 
ÞvýK8³d4è ùÆVøàýK8ád4èßdd  ùÆVø ¯ |àýK8Ád4è ùÆVøWÉýK8³d4èßddd  ùÆVø ¯ | WÉýK8Ád4è ¹º »½Ä¾À¿k֏Ô¿ŔÚ»½vÆŏųÖe΋º'Ë'¾Þ»¾À΋º'ÔÑ"Á'Æ6ΔåÄÃlåľÀÑÀ¾À»¾ÞÅÉ¿Ú ùÆVø ¯ |?ÞEý;ÚQù!Ƥø ¯ |Fàý;Ú1Ôº'Ë~ùÆVø ¯ |?WÉýãÃlÆűÔÑÀÑ=dEâ åÄÜ'»1ù!Ƥø ¯ |Þvý¾À¿:¿»Ãl»¾À¿B»¾À֏ÔÑÀÑÀßãбΔÆÅÆÅÉÑÙ¾ÀÃlåÄÑÞÅM»½'Ôº Δ»½vŏÆ;¿å9ÅÉ֏ÔÜ'¿Bű¾Þ»¿©¿ÔÐÁÄÑÞÅ¿¾Å3øeedEÚßdddKÇAΔÆ6Ë'¿ 8pÐH¾ÀÑÀÑÀ¾Þ΋º i ù!ÆeøÞEýBýç¾À¿1åÄ¾ÞØ”ؔŏÆQ»½'Ôº©Î”»½'ŏÆ6¿âÆÐ)Ö*Õ »Ü'ÔÑÙÑÞߔÚM»½'¾À¿-Á'ÆΔåÄÑÞÅÉÐ<å9ÅÉÖe΋ÐÅÉ¿CϔŏÆß׿BŏÆ6¾À΋Ü'¿-¾Ùº ÅeáE»BÅɺ'ËvÅÉ˶ÐÎEËvÅÉÑÀ¿Ú&ŏϔÅɺ¶»½v΋Üv؋½³ÁÄÃlÆ;ÔÐŏ»BŏÆ6¿5Î”Ò »½vÅ-ÐÎ_Ë'ÅÉÑÀ¿ ÃlÆÅ©¿BŏÅɺK¾Àº¥»½vÅ»BÆ6Ô¾Ùº'¾ÀºvØ7ÖeΔÆÁÄÜ'¿Éâ éçÎKÖe΋º'¿6¾ÀËvŏƩ¿ÜÄÖ6½×¿»Ãl»¾À¿B»¾À֏ÔÑÆÅÉÑÀ¾ÀÃlåľÙÑÀ¾Þ»gßKΔÒXà Á'Æ6ΔåÄÃlåľÀÑÀ¾À»gßïÅÉ¿B»¾ÀÐHÃl»BŔÚÛÇÅ~¾Àº»BÆÎEË'Ü'ÖeŶ»½vÅ~Öe΋ºEÕ ÖeŏÁ'»)ΔÒMÇÅÉ¾ÞØ‹½»¾ÀºvØkÌ3ÃlÆ͔ÎVÏäÔ¿¿Ü'бÁ'»¾Þ΋ºÚ'Ô¿Ò|΋ÑèÕ ÑÞÎVÇ¿Ï ù!ÆnøWøw| W C:î ø.C ç2Y C:î ø.C ý Z ù!ÆVøWOøw| WOø.-?> î ø.C ç2Y ø.s<î ø/C ý i E øW ø.-?> î ø.C ç2Y ø.s<î ø/C ý øe ‹ý ù!Ƥø.Y øw| W C:î øSç2Y C:î ø.C ý Z ùÆVø.Y øw| WOø/w î øSç2Y ø/xî ø/C ý i E øWø.w î ø¤ç2Y ø.x<î ø.C ý øe¨0‹ý ¹ Ò5»½vůÁ'ÆΔåÄÃlå¾ÀÑÀ¾Þ»oß~ҝÜĺ'Öe»¾Þ΋ºÚùÆÉÚ¾Ù¿±Ü'¿BÅÉËïÔ¿ »½vÅ7ÇAÅÉ¾ÞØ‹½»ÛҝÜ'ºÄÖe»¾Þ΋ºÚ E Ú»½vÅ7ÅÜÄÃl»¾Þ΋º'¿ÛÃlå9ΤϔŠå9ÅÉÖe΋ÐÅÊÅÜÄÃl»¾Þ΋º'¿àÔ¿¿Ü'ÐH¾ÀºvاíB΋¾Àº» ¾ÀºÄËvŏÁ9ÅɺEÕ ËvÅɺÄÖeÅå9ŏ»gÇAŏÅɺäÆ6Ôº'Ë'΋ÐÈÏnÃlÆ6¾ÙÃlåÄÑÞÅÉ¿ Ô¿"ҝ΋ÑÀÑÞÎVÇ¿Ï ù!ƤøWOø÷| W C:î ø/C ç2Y C:î ø.C ý Z ù!ÆnøW ø ç W ø.-?> î ø.C ç2Y ø/s<î ø.C ý øe)”ý ù!Ænø.Y øŒ|W C:î øSç2Y C:î ø.C ý Z ù!Ƥø.Y øSç WOø/w î øeç2Y ø.x<î ø.C ý øe ‹ý é"½'Å©Å_Ü'Ãl»¾Þ΋º'¿ãÃlå9ΤϔÅÔ¿¿ÜÄÐÅ©»½'Ãl»ã»½vÅ-Á'ÆΔåÄÃnÕ åľÙÑÀ¾Þ»g߯ΔÒ»½vÅ֏ÜvÆÆÅɺ»ãÐΔÆÁĽvÅÉбÅ-»ÃlØâWOøíB΋¾Àº»ÑÞß ËvŏÁ9Åɺ'ËÄ¿ç΋º7å9Δ»½C»½vÅ Á'Æ6ŏÏ_¾Þ΋ÜÄ¿ã< »Ãl؋¿WOø.-?> î ø.C Ôº'Ë~»½vűÁ'ÆŏÏE¾Þ΋Ü'¿ŽˆñÇΔÆ6ËÄ¿ƒY ø.sOî ø.C Ôº'Ë~»½'Ãl» »½vÅ)Á'ÆΔåÄÃlå¾ÀÑÀ¾Þ»oߩΔÒ»½vÅ5֏ÜvÆÆÅɺ»ÇAΔÆ6ˋY ø íB΋¾Àº»ÑÞß ËvŏÁ9Åɺ'ËÄ¿΋º»½vÅ֏ÜvÆÆÅɺ»:»ÃlØãÔº'ËÛ»½vÅÁ'ÆŏÏE¾Þ΋Ü'¿ÍŠ »Ãl؋¿Wø.w î ø&ÔºÄË»½vÅ Á'ÆŏÏE¾Þ΋Ü'¿=Œ©ÇΔÆ;Ë'¿=Y ø.x<î ø.C â ¹ ÒÃátXÃÉߔÅÉ¿6¾ÀÔº§ÐÎEËvÅÉÑÔ¿¿6Ü'ÐÅÉ¿ãíB΋¾Àº»k¾ÀºÄËvŏÁ9ÅɺEÕ ËvÅɺÄÖeŔÚ:Çű֏ÔÑÙÑN¾Þ»Ã-í΋¾Ùº»Û¾Àº'Ë'ŏÁ&ÅɺÄËvÅɺ'ÖeÅ-бÎ_ËvÅÉÑ ø䋹 Ì„ý;â 
Ð5Öe»Ü'ÔÑÀÑÀߔÚ9ÜÄ¿¾ÀºvØ.»½vÅCÁ'ÆΔåÄÃlåľÙÑÀ¾Þ»g߯ÒúÜ'º'Öe»¾À΋º3Ô¿ »½vÅHÇÅÉ¾ÞØ‹½»©ÒúÜ'º'Öe»¾À΋º„¾À¿бÃl»½vÅÉÐHÃl»¾À֏ÔÑÀÑÞ߄¾Àº'ÖeΔÆÕ ÆÅÉÖe»5Ôº'ËK¾ÙÐÁÄÑÀÔÜ'¿6¾ÞåÄÑÞŔâK,vΔÆ5ÅeávÔÐÁÄÑÞŔÚǽ'¾ÀÑÞÅ㻽vÅ ¿ÜÄРΔÒNÁ'Æ6ΔåÄÃlåľÀÑÀ¾À»¾ÞÅÉ¿XΔÒ!ÔÑÀÑQ¿BÅɺ»BÅɺÄÖeÅÉ¿)Ç)¾Þ»½ä»½vÅ ¿ÔбÅHÑÞÅɺvؔ»½„å9ÅÉÖe΋ÐÅÉ¿`lâßdK¾Àº„Ôº~ð5̳Ì×ÚQ¾Þ»Ûå9ÅeÕ Öe΋ÐÅÉ¿NºÄÃl»ÜvÆ6ÔÑÀÑÞßÛÑÞÅÉ¿6¿ç»½ÄÔºLlâßdã¾ÀºCÃ䔹Ì×âlé"½vŏÆÅeÕ ÒÎ”ÆŔÚ=䔹Ì3¿-¿½v΋Ü'ÑÀ˄º'Δ»å9űÜ'¿BÅÉË~¾Àº×֏ÔÑÀ֏Ü'ÑÀÃl»¾Àº'Ø »½vÅ-Á'ÆΔåÄÃlå¾ÀÑÀ¾Þ»oßkΔÒ!Ãk¿BÅɺ»BÅɺÄÖeŔâ)ð)ΤÇAŏϔŏÆÉÚ:¾ÞÒMÇÅ ÇXÔº»»BÎêºÄË»½vÅÐ΋¿»ÑÀ¾Þ͔ÅÉÑÀß-¿BÅ_ÜvÅɺ'ÖeÅ)Ò|ΔÆÅÉÔÖ6½ ¿BÅɺ»BÅɺ'ÖeÅÔº'Ëk»½vÅíB΋¾Àº»AÁ'ÆΔåÃlåľÀÑÀ¾Þ»oß-ΔÒçÅÉÔÖ6½¥ÁÄÃnÕ Æ6ÔÐŏ»BŏƄ¾À¿3Æŏ؋ÃlÆ6Ë'ÅÉËÂÔ¿¶Ã ¿ÖeΔÆŔڎ䔹Ì3¿3ÇΔÆ6Í ÇAÅÉÑÀÑ â tAßHÆ6ŏÁÄÑÀÔ֏¾ÀºvØCÖeΔÆÆÅÉ¿BÁ9΋º'ËľÀºvØÁÄÃlÆ;ÔÐŏ»BŏÆ6¿ÚvÔº ÅeáE»BÅɺ'ËvÅÉËHбΔÆÁĽvÅÉÐÅeÕgÜĺ'¾Þ»!ð5Ì3Ì ÖÃ”ºHå&Å»BÆ6Ôº'¿Õ ҝΔÆ6ÐÅÉ˶¾Àº»BÎ.»½'Å7ÖeΔÆ6ÆÅÉ¿BÁ9΋º'Ë'¾Àº'Ø䔹̓Ú&Ç)½'¾ÀÖ6½³¾À¿ ËvÅeêº'ÅÉ˯Ô¿"ҝ΋ÑÀÑÞΤÇ)¿Ï åÛømlon pmqr > î st ç uvn pmqræw î xHt ýF| 8àùÆVøW C:î X ç[ x î X ç2Y C:î X ý Z X $ øf C ùƤøWOø2y ç[ ø/zç WOø/-?> î ø/C y ç[%ø.-?>|{ C:î ø.C zç Y ø.sOî ø.C ý i ù!Ƥø.Y øSç Wø.w î ø y ç[%ø.w { C:î ø/zç2Y ø.x<î ø.C ý øe‹ý ¹º±»½vÅ)Åeá_»BÅɺÄËvÅÉËç䋹 Ì“Ú‚åÛøml‘p r x î x t ç uèr x î x t ý;Ú_»½vÅ Á'ÆΔåÃlåľÀÑÀ¾Þ»oßΔÒvú'Î_ËvÅGON 25t ÿI)ΔÒE»½vÅAÐ΋¿B» ÑÀ¾Þ͔ÅÉÑÀß ¿BÅ_ÜvÅɺ'ÖeÅ~¾Àº ,M¾Þ؋Ü'ÆÅ „¾À¿ä֏ÔÑÀ֏ÜÄÑÀÃl»BÅÉËÔ¿ ҝ΋ÑÀÑÞΤÇ)¿Ï ù!Ƥø“_“~”‹•zç`õç~ ~Kç –é—š™ ç9;珎 / ç?úý i ù!Ƥø¤N%2›ç~;~Kç –é—é™ç “_“~”‹• 珎/%ç?úý é"½'ÅÁÄÃlÆ6ÔÐŏ»BŏÆ;¿ΔÒNÃç䋹 ÌõÃlÆ6ÅÅÉ¿B»¾ÀбÃl»BÅÉË3å_ß Ü'¿¾ÙºvØ»½vÅ ÁÄÃlÆ6ÔÐŏ»BŏÆ;¿“ΔÒK»½vÅ ÖeΔÆ6ÆÅÉ¿BÁ9΋º'Ë'¾Àº'Ø ð5Ì3ÌþÔ¿"ҝ΋ÑÀÑÞÎVÇ¿Ï ùÆHª¾«^­ øW ø y ç[ ø zç WOø.-?> î ø.C y ç[%ø.-?>|{ C:î ø/C zç Y ø.sOî ø.C ýŒ8 ùƪ¾«^­XøW ø y ç[ ø z·| WOø.-?> î ø.C y ç[%ø.-?>|{ C:î ø/C zç Y ø.sOî ø.C ý i ùÆ ÙFÚãø W ø.-?> î ø.C y ç[ ø.-?>|{ C:î ø.C zç Y ø/s<î ø.C ý ùƤø.Y ø ç W ø.w î ø y ç[ ø/w { C:î ø zç2Y ø/xî ø/C ýK8 ùÆVø.Y øŒ|Wø.w î ø}y ç[ ø/w { C:î ø/zç2Y ø/xî ø/C ý i ùƤøWOø/w î ø y ç[%ø.w { C:î ø/zç2Y ø.x<î ø.C ý ù!Æ ÙÇÚ øWø.-?> î ø ýK8 ,·øWOø.-?> î øúý›9Û à T %(65  % ø',Æ9øW ø.-?> î ø ý$9Ûlý ê ë "=ì ['¸dg¨ [ÄR緋j 
¹º„Åeá_Á9ŏÆ6¾ÙÐÅɺ»¿ÉÚ:ÇűÜÄ¿BÅÉ˄»½vÅHÓ1ùÂÖeΔÆÁÄÜ'¿ ǽ'¾ÙÖ6½ Öe΋ºÄ¿¾À¿B»¿.ΔÒ`¨0Ú $ÇAΔÆ6Ë'¿.Ôº'ˊ _Úí!4 ¿BÅɺ»BÅɺ'ÖeÅɿԺÄ˯¾À¿X»ÃlؔؔÅÉËäǾÀ»½î0 ±ùAûüH»Ãl؋¿â¹o» ÇXÔ¿ä¿Bŏ؋ÐÅɺ»BÅÉËZ¾Àº»BÎï»gÇAÎñÁÄÃlÆ6»¿Ú)»½'ń»BÆ6Ô¾Àº'¾Àº'Ø ¿Bŏ»¥Î”Ò;d¾ïÔº'Ë »½vŶ»BÅÉ¿B»ä¿Bŏ»¥Î”Òòed¾ï.Úã¾Àº »½vÅ ÇXÃÉß໽'Ãl»äÅÉÔÖ;½Z¿BÅɺ»BÅɺ'ÖeÅ×¾Àºà»½'ń»BÅÉ¿B»K¿Bŏ»äÇXÔ¿ ÅeáE»BÆ6ÔÖe»BÅÉË7Ò|Æ6΋ÐŏϔŏÆß`ed5¿BÅɺ»BÅɺ'ÖeŔâN¹º»½vÅX¿ÔÐÅ ÇXÃÉߔڔÇAÅбÔËvÅednÕzÒ|΋ÑÀËCË'Ãl»Ã)¿Bŏ»1ҝΔÆ÷ednÕzҝ΋ÑÀËCÖeÆ΋¿¿ ÏlÔÑÀ¾ÀË'Ãl»¾Þ΋º:â ¹º§Î”Æ6ËvŏÆ3»BÎ ÐΔÆÁ½v΋ÑÞΔ؋¾À֏ÔÑÀÑÀß Ã”º'ÔÑÞß=Å×ÅÉÔÖ;½ ÇAΔÆ6ËڋÇAÅ"Ü'¿BÅÉË7»½vÅ ÓÛΔÆ6ÅÉÔº±ÐΔÆÁĽ'΋ÑÞΔ؋¾À֏ÔÑEÔº'ÃnÕ ÑÞß=ÅÆø:ÅÅ”Ú ‹ýAǽľÀÖ6½.¾À¿XÖe΋º'¿¾À¿»BÅɺ»XǾ޻½k»½vÅ Ó1ùÖeΔÆÁÜ'¿âitßàÜ'¿¾Ùºvؓ»½'ŶбΔÆÁĽv΋ÑÞΔ؋¾ÞÕ ÖÃ”ÑÔº'ÔÑÞß=ÅÆÉÚ绽'ÅHÃÉϔŏÆ6Ãlؔťº_Ü'Щå9ŏÆΔÒXÁ9΋¿¿¾ÀåÄÑÞŠÔº'ÔÑÀß_¿BÅÉ¿"Á&ŏÆÇΔÆ6Ëäå&ÅÉÖe΋бÅÉ¿ _âjPlâ ,M¾Þ؋Ü'ÆÅá!VÕ¤ ʾÀÑÀÑÙÜ'¿B»BÆ6Ãl»BÅñؔÆ;ÃlÁĽ'¿×¿6½vΤǾٺvØZ»½vŠäϔŏÆ6ÃlؔųÔ֏֏Ü'Æ6ÔÖeßïÆ6Ãl»BÅÉ¿±Î”Òãð5Ì3̳¿HÔº'Ëv䔹Ì3¿Ú ǾÀ»½v΋Üv»©Öe΋º'¿6¾ÀËvŏÆ6¾Àº'ØäÇΔÆ;ËEÕg¿BÁÄÔ֏¾Àº'ØvÚQÇ)¾Þ»½×Öe΋ºEÕ ¿¾ÙËvŏÆ6¾ÀºvØ ÇAΔÆ6ËEÕg¿ÁÄÔ֏¾ÀºvØ"΋ºÄÑÞßÛ¾ÀºÛ»½vÅAÑÞÅeáE¾Ù֏ÔыÁ'ÆΔå'Õ ÃlåľÙÑÀ¾Þ»¾ÞÅÉ¿Ú1Ǿ޻½~Öe΋º'¿6¾ÀËvŏÆ6¾Àº'دÇΔÆ6ËvÕg¿BÁÄÔ֏¾ÀºvØK΋º'ÑÞß ¾Àº¥»½'ÅÛ»ÃlرÁ'ÆΔåÄÃlå¾ÀÑÀ¾Þ»¾ÞÅÉ¿ÉڍÔº'˯ǾÀ»½.Öe΋ºÄ¿¾ÀËvŏÆ6¾ÙºvØ ÇAΔÆ6ËEÕg¿BÁÔ֏¾ÀºvؾÀº©å9Δ»½Û»½vÅ»ÃlØãÔº'Ë-ÑÞÅeáE¾Ù֏ÔыÁ'ÆΔå'Õ ÃlåľÙÑÀ¾Þ»¾ÞÅÉ¿ÚNÆ6ÅÉ¿BÁ9ÅÉÖe»¾ÞϔÅÉÑÞߔâÔðŏÆŔÚAÑÀÃlå9ÅÉÑÀ¿7¾Àº“áÕgÃnáv¾À¿ ¿BÁ9ÅÉ֏¾ÞҝߓбÎ_ËvÅÉÑÙ¿C¾ÙºÔ»½vůÇAäßÔ»½'Ãl» > î s w î x ËvÅɺvΔ»BÅÉ¿ k øml n pqr > î st ç u n pqrw î xHt ý:ΔƏåøml n pqr > î st ç u n pmqræw î xHt ý;â é"½'ÅXбÎ_ËvÅÉÑÙ¿QÃlÆ6Å"ÃlÆÆ6ÔºvؔÅÉË7å_ß-»½'ÅXÔ¿6ÖeÅɺ'Ë'¾ÀºvØãΔÆBÕ ËvŏÆAΔÒ»½vŏΔÆŏ»¾Ù֏ÔÑ&º_Ü'Ð-å9ŏÆΔÒ:ÁÄÃlÆ;ÔÐŏ»BŏÆ6¿â!é"½'Å êÄÆ;¿B»»oÇÎÐÎEËvÅÉÑÀ¿AÃlÆÅÛ¿B»Ôº'ËÄÃlÆ6ËkбÎ_ËvÅÉÑÙ¿Ôº'Ë.»½vŠΔ»½vŏÆ;¿)ÃlÆÅ-Åeá_»BÅɺÄËvÅÉËKÐÎEËvÅÉÑÀ¿ÉâAé"½'Å©ÃÉϔŏÆ6ÃlؔÅÔÖ*Õ ÖÜvÆ;ÔÖeßKÆ6Ãl»BÅÉ¿ãå9ŏߔ΋º'˳»½vÅ7Æ6ÔºvؔÅCΔÒÅÉÔÖ;½„Ø”Æ6ÃlÁĽ ÃlÆÅ-¾Àº»BÅɺ»¾Þ΋º'ÔÑÙÑÞßH΋б¾À»B»BÅÉËâ ¹º »½vÅÉ¿BųêÄØ‹ÜvÆÅÉ¿ÉÚ"ÇŶ֏ÔºàΔåÄ¿BŏÆ6ϔÅ3»½'Ãl»¥»½vÅ ¿¾ÙÐÁÄÑÀ¾èêÅÉËKåÄÔÖ6͍ÕzÎlì ¿ÐÎ_Δ»½'¾Àºvد»BÅÉÖ;½'º'¾_ÜvÅHб¾À»Õ ¾Þ؋Ãl»BÅÉ¿K¿BÁÄÃlÆ6¿BÅeÕgËÄÃl»ÃïÁ'ÆΔåÄÑÀÅÉб¿k¾Ùºàå&Δ»½§ð5̳Ì3¿ Ôº'Ëð䔹Ì3¿Éâ Ð5¿?ÅeáEÁ9ÅÉÖe»BÅÉËÚñ䋹 
Ì³¿?ÔÖ;½'¾ÞŏϔÅÉ¿ ½'¾À؋½vŏÆÔ֏֏ÜvÆ6ÔÖeß »½'Ôº »½vÅ ÖeΔÆÆÅÉ¿BÁ9΋º'Ë'¾ÙºvØ ð5Ì3̳¿Z¾Ùº ¿B΋ÐÅ ÅeáE»BÅɺ'ËvÅÉ˼ÐÎ_Ë'ÅÉÑÀ¿§Öe΋º'¿Ü'ÑÀ»Õ ¾Àº'Ø$Æ;¾ÀÖ6½§Öe΋º»BÅeá_»¿Éâ A΋º'¿ÜÄÑÞ»¾ÀºvØ$ÇΔÆ;ËEÕg¿BÁÄÔ֏¾Àº'Ø Ð±Ãl͔ÅÉ¿„¿6ÑÀ¾Þ؋½»3¾ÀÐÁÄÆΤϔÅÉÐÅɺ»¶¾ÀºÂ¿΋МΔÒHå&Δ»½ ð5Ì3̳¿äÔº'ËÁ䋹 Ì³¿â ¹ ».¾Ù¿.¿»Ãl»¾À¿B»¾À֏ÔÑÀÑÀß ¿¾Þ؋º'¾Þê'Õ ÖÃ”º»-Ç)¾Þ»½×Öe΋ºEê&ËvÅɺ'ÖeÅLn»½ÄÃl»C»½'ÅHå9ÅÉ¿B»-бÎ_ËvÅÉÑzÚ k øml p2r x î x t ç u p2r x î x t ýÂø'0_â±ï±ý;ÚÔ¾À¿ñå9ŏ»B»BŏÆà»½'Ôº Ôºß-Δ»½'ŏÆÐÎEËvÅÉÑÀ¿M¾Àº'֏ÑÀÜÄË'¾ÀºvØ)»½vÅ Á'ÆŏÏE¾Þ΋Ü'¿M¿B»ÔºEÕ Ë'ÃlÆ;ËæÐ±Î_ËvÅÉÑ k ømlr C:î Æ t ç uèr Æ î Æ t ýÈø'vâ± 6ï±ýeø1ÅÅ”Ú  ‹ý;Ú7»½vœÁ'ÆŏÏE¾Þ΋Ü'¿KбÎ_ËvÅÉÑ k øml p r C:î Æ t ç u r Æ î Æ t ý ø'vâ±06ï±ý øúÓ¾ÀРŏ»~Ã”Ñ âÞÚ  ‹ý;ÚkÔºÄË»½'Åïå&ÅÉ¿» 䋹 Ì“Ú k ømlr C:îjC8t ç uÜp r C:îjC8t ýãø'0_â± 6ï±ý;â ò £kW1Rf #gcjldgW1R E„Ŷ½'ÃÉϔÅ3Á'ÆÅÉ¿BÅɺ»BÅÉËñ»½'ÅäÅeá_»BÅɺÄËvÅÉË ð5̳Ì3¿HÒ|Î”Æ ÓÛΔÆ6ÅÉÔº„ùAûüä»Ãlؔ؋¾ÀºvØvÚǽľÀÖ6½³Ã”¿¿Ü'бÅ"íB΋¾Àº»5¾ÀºvÕ ËvŏÁ9Åɺ'Ë'Åɺ'ÖeÅ"å9ŏ»gÇAŏÅɺ.Æ6Ôº'Ëv΋Ð?ÏnÃlÆ6¾ÙÃlåÄÑÞÅÉ¿Ú‹Ç)½'¾ÀÖ6½ ÃlÆÅXåÄÔ¿ÅÉË7΋º7»½vÅ ÐΔÆÁĽvÅÉбÅeÕgÜ'º'¾Þ»QÑÀÃl»B»¾ÀÖeÅ ¿B»BÆ6Ü'Ö*Õ »ÜvÆ6ŔکÔº'ËÂǽ'¾ÀÖ;½ZÖe΋º'¿¾ÙËvŏÆäÇAΔÆ6ËEÕg¿BÁÄÔ֏¾ÙºvØ Ã”º'Ë Æ6¾ÙÖ6½×¾ÙºvÒ|ΔÆ;бÃl»¾Þ΋º„¾Àº~Öe΋º»BÅeáE»¿â¯¹ º×»½vÅHÐÎEËvÅÉÑÀ¿ÉÚ Ã¶¿¾ÀбÁÄÑÀ¾èêÄÅÉË~ϔŏÆ6¿¾Þ΋º$ΔÒåÔÖÍÕzÎlìà¿бÎΔ»½'¾ÙºvØ3¾À¿ Ü'¿ÅÉ˯»BÎHб¾Þ»¾À؋Ãl»BÅË'Ãl»ñ¿BÁÄÃlÆ;¿BÅɺvÅÉ¿¿ ÁÄÆΔåÄÑÞÅÉЯâ ,'Æ΋Ð?»½'ÅÅeáEÁ&ŏÆ;¾ÀÐÅɺ»ÔÑvÆÅÉ¿Ü'ÑÀ»¿Ú‹ÇAÅ)½'äϔÅ)ΔåvÕ ¿BŏÆ6ϔÅÉËZ»½'Ãl»3Åeá_»BÅɺÄËvÅÉËZÐÎEËvÅÉÑÀ¿KÔÖ6½'¾ÞŏϔÅÉËÊŏϔÅɺ å9ŏ»B»BŏƶÆÅÉ¿Ü'ÑÀ»¿ä»½'Ժ»½vœ¿»Ôº'Ë'ÃlÆ6ËÊÐÎEËvÅÉÑÀ¿K¾Ùº ֏Ô¿BÅΔÒEå&Δ»½-ð)̳̳¿1Ôº'Ëb䔹Ì3¿ÚV»½'Ãl»1»½'Å¿¾ÀÐÁÑÀ¾èÕ ó)ô}õ ö ó)ô}õ ô ó)÷}õ ö ó)÷}õ ô ó)ø}õ ö C:î Æ Æ î Æ x î Æ Æ î Æ C:î Æ C:î Æ x î Æ C:î Æ C:îjC Æ î Æ C:îjC C:î Æ C:î Æ x î Æ x î Æ x î Æ C:îjC x î Æ C:î Æ C:îjC x î Æ C:îjC C:îjC C:îjC x î x Æ î Æ x î x C:î Æ x î x x î Æ x î x C:îjC C:î Æ x î x x î Æ x î x C:îjC x î x x î x x î x ùú‹ú i i i i i i i i i i i i i i i i i i i i û#ümú ý ý ý ý ý ý ý ý ý ý ý ý ý ý ,M¾Þ؋Ü'ÆÅé!=ÏKEà¾Þ»½v΋Üv»"Öe΋º'¿6¾ÀËvŏÆ6¾Àº'ØCÇAΔÆ6ËEÕg¿ÁÄÔ֏¾ÀºvØ ó)ô}õ ö ó)ô}õ ô ó)÷}õ ö ó)÷}õ ô ó)ø}õ ö C:î Æ Æ î Æ x î Æ Æ î Æ C:î Æ C:î Æ x î Æ C:î Æ C:îjC Æ î Æ 
C:îjC C:î Æ C:î Æ x î Æ x î Æ x î Æ C:îjC x î Æ C:î Æ C:îjC x î Æ C:îjC C:îjC C:îjC x î x Æ î Æ x î x C:î Æ x î x x î Æ x î x C:îjC C:î Æ x î x x î Æ x î x C:îjC x î x x î x x î x ùú‹ú i i i i i i i i i i i i i i i i i i i i û#ümú ý ý ý ý ý ý ý ý ý ý ý ý ý ý ,M¾Þ؋ÜvÆ6Å=ÏKE ¾À»½äÖe΋º'¿¾ÀË'ŏÆ6¾ÀºvØCÇΔÆ;ËEÕg¿BÁÄÔ֏¾Àº'Ø΋º'ÑÞߥ¾Àº¥»½vÅÑÞÅeáv¾À֏ÔÑÁ'Æ6ΔåÄÃlåľÀÑÀ¾À»¾ÞÅÉ¿ êÄÅÉË åÄÔÖÍÕzÎlìÊ¿6ÐÎΔ»½Ä¾ÀºvØ~»BÅÉÖ;½'º'¾_ÜvÅ3ÐH¾Þ»¾Þ؋Ãl»BÅÉË Ë'Ãl»Ãã¿BÁÄÃlÆ;¿BÅɺvÅÉ¿¿·Üľ޻BÅÅeì9ÅÉÖe»¾ÞϔÅÉÑÞߔڍ»½'Ãl»NÖe΋º'¿Ü'ÑÞ»Õ ¾ÀºvØ©ÇΔÆ6ËvÕg¿BÁÄÔ֏¾ÀºvØCбÔËvÅ5¿ÑÀ¾Þ؋½»¾ÀÐÁÄÆΤϔÅÉÐÅɺ»AÎ”Ò Ã”ÖÖÜvÆ6ÔÖeߔÚçÔº'˳»½'Ãl»Û¿B΋ÐÅCÅeá_»BÅɺ'Ë'ÅÉËþ䔹Ì3¿5΋Üv»Õ Á9ŏÆҝΔÆ6ÐÅÉË¥»½vÅÖeΔÆÆ6ÅÉ¿BÁ9΋º'Ë'¾ÀºvØð5Ì3̳¿â )ΤÇÚ¯ÇAŧÃlÆŧ¾ÀÐÁÄÑÀÅÉÐÅɺ»¾ÙºvØÂÔº'ËÝŏÏlÔÑÀÜ'Ãl»Õ ¾ÀºvØkÏnÃlÆ;¾Þ΋Ü'¿5¿ÐÎ_Δ»½'¾ÀºvØk»BÅÉÖ6½'º'¾JÜvÅÉ¿ã¾Àº3ΔÆ6ËvŏÆ5»BÎ êº'ËàÐΔÆ6ńÅeì&ÅÉÖe»¾Þϔœ¿ÐÎ_Δ»½'¾ÀºvØ$»BÅÉÖ6½'ºÄ¾Ü'Å³ÒÎ”Æ ð5Ì3Ì 5䔹ÌäÕzåÔ¿BÅÉË3ÓΔÆÅÉÔº3ùXûü.»Ãlؔ؋¾Àº'ØvâÐ5º'Ë Ã”ÑÀ¿BÎvÚvÇAÅ©ÃlÆÅÛ»BÆ6ß_¾Àº'Ø»BαÃlÁ'ÁÄÑÞßî䔹Ì3¿ »BÎkË'¾èì9ŏÆÅɺ» ÃlÆÅÉÔ¿ä¿6Ü'Ö6½§Ã”¿ä¾ÙºvÒ|ΔÆ;бÃl»¾Þ΋º Åeá_»BÆ;ÔÖe»¾Þ΋º¾Àº »½vÅ åľÞÎlÕgб΋ÑÞÅÉ֏Ü'ÑÀÃlƯËv΋бÔ¾ÙºÚºv΋Ü'º ÁĽ'Æ6Ô¿BŶÖ6½_Ü'º'ÖÍÕ ¾ÀºvØvÚ'ÔºÄËä¿BÎ΋ºâ ¦¶[P*['¸‹[ÄRfE[Äj ÿ ^ù    ×û!"# $  ú%&'(*)+ -, .0/211(3 ÿ4(5*,  768(9&:,<;=6>; ?A@ "2CBDE(E EÎüF HGIJ2KLMJONQP>RSUTTWVYX[Z]\*P_^ ` a J*b2NcLJ*bedIWPgf hK"fi\*`jWbP=S"`>`kf-l!ScbK SmidnddjWoOp(qWr*(s*tu*v st1 ? Dw$  x/11(y 9z+{f>`}|f>b!l~GI J\"f>`kf€WPYfiKƒ‚~J2|S"`„€ NcJI…Z]\PY{I \*`†‡\*bAl{\l(S"‰ˆW, Šnˆ  ", *, }(  ù ‹*7Œ  }‹("< },O Œ ?Ž ÿnD } Š ƒ/1As jcbPgIJ|{KcPgfiJ*bQP=J9‘PFJKR \*€PgfiKGIJ*o KSc€€cSc€W$&$ " !, c";ùŠ ŠYn")Üû "(’“•”eˆ5 – 9’“¬ÿn¾ùn<,2/21!s3 G+\PYPFS"IWb a `}\*€o €Wf hK \PYfiJb—\b˜|–‘KScbSdb\*`k™_€Wf€W^û( ƒš› }Š "(üWûAœAA/21(3 nž BŸ n&' @ 5 Ё*,   w24(5" " }2$6 ?@ 2c ] x,  éÿ¡<, ¢ *, }( %6&' @ 5Š ,  %&:*; ¢"," £‰üF ~zfiJ*¤ S"PYIWf}¥(\*˜u(¦§i3*;=u!¨c© ª3!s2vªyu? 
=ú%Ÿ«“*, .¬/1(t!sAvÿ¡<, ¢,  ›6e&$ ## Š} },  68 ¢ ?A@  $ˆ, 68(•, ­® E5EÍú9"Š¢¯; @  " !,D6 ?A@ ¯’+"E  }."29üF ¯j °$°$°[± I\*b€Wo \(KcPgfiJ*b€²J*b³dnKJ{A€PgfiKc€ ´‘µ SSK R¶\*b|%‘˜f-lb\*`+GIJ*o KSc€€f>bAl mid·‘‘GDr_ 3! §Y3(¨W© u(¦(¦_vAu(¦ /(üW;ù](«“ EMû;ù](«· ¢M!  œ¯;O!«· }¢ƒ/1(1t D«·;   ¸&:<,;g6>; ?@ QBDE(E E—Œ< Eâú~¹ }¢e5 ¢ ÿ¡ !,  @ ‹ú9"ŠgüF QGIJ2KL+J<NP>RS7TºcVYX»Ze\*PgfiJ*b\*` a JbNcS"IScbKS¯Jb…¼J*IS\bMjWb2NcJ*IW¤ \*PgfiJ*b~GIJ2KSW€€Wf>b!l 1*v˜/uû;ù «· }¢ƒŸc,7ŠY[/211(3 ›«·  ½&:,<;=6>; ?A@ "2 BDE(E E #A9Œn } E…7w 5."."ƒ ",üF —GIJ2KLJONeP>RS ó)ô}õ ö ó)ô}õ ô ó)÷}õ ö ó)÷}õ ô ó)ø}õ ö C:î Æ Æ î Æ x î Æ Æ î Æ C:î Æ C:î Æ x î Æ C:î Æ C:îjC Æ î Æ C:îjC C:î Æ C:î Æ x î Æ x î Æ x î Æ C:îjC x î Æ C:î Æ C:îjC x î Æ C:îjC C:îjC C:îjC x î x Æ î Æ x î x C:î Æ x î x x î Æ x î x C:îjC C:î Æ x î x x î Æ x î x C:îjC x î x x î x x î x ùú‹ú i i i i i i i i i i i i i i i i i i i i i û#ümú ý ý ý ý ý ý ý ý ý ý ý ý ý ý ý ,Q¾À؋ÜvÆÅ ÏKEà¾Þ»½äÖe΋º'¿6¾ÀËvŏÆ6¾Àº'Ø-ÇAΔÆ6ËEÕg¿BÁÔ֏¾ÀºvØ΋º'ÑÞߥ¾Àº¥»½vÅÛ»ÃlØkÁ'ÆΔåÄÃlå¾ÀÑÀ¾Þ»¾ÞÅÉ¿ ó)ô}õ ö ó)ô}õ ô ó)÷}õ ö ó)÷}õ ô ó)ø}õ ö C:î Æ Æ î Æ x î Æ Æ î Æ C:î Æ C:î Æ x î Æ C:î Æ C:îjC Æ î Æ C:îjC C:î Æ C:î Æ x î Æ x î Æ x î Æ C:îjC x î Æ C:î Æ C:îjC x î Æ C:îjC C:îjC C:îjC x î x Æ î Æ x î x C:î Æ x î x x î Æ x î x C:îjC C:î Æ x î x x î Æ x î x C:îjC x î x x î x x î x ùú‹ú i i i i i i i i i i i i i i i i i i i i i û#ümú ý ý ý ý ý ý ý ý ý ý ý ý ý ý ,M¾Þ؋Ü'ÆÅ =ÏKEà¾Þ»½äÖe΋º'¿¾ÙËvŏÆ6¾ÀºvØCÇAΔÆ6ËEÕg¿BÁÄÔ֏¾Ùºvر¾Àº¥å9Δ»½¯»½vÅÛ»ÃlØ.Ôº'ËäÑÞÅeáv¾À֏ÔÑ9Á'ÆΔåÃlåľÀÑÀ¾Þ»¾ÀÅÉ¿ ¾ VYXUZ]\*PgfiJ*b\*` a J*b2NcScIS"b˜KS¿Jb³¼¯JIS\*b³jWb2NcJ*IW¤ \*o PgfiJ*b9GIJ2K Sc€€Wf>b!l1(3_vAy(¦3û;ù «· ¢À —œ¯;O«· ¢ƒ…/211!]ˆ  c ¢¯ , ‹ ­® E~ ½&:<,;g6>; ?A@ "2¿B‡EE( } E‹üF UGŸIJK2L J<N»P8R S~Ze\*PgfiJ*b\*` a J*bNcS"IS"b˜KS¿Jb›¼JIS\*bÁjWb2NcJ*IWo ¤ \*PgfiJ*b»‘K"fiScbK S ‘J2K"fiScPg™* ?A@  Ey!ª(s2vAy3(¦ û;Fˆ]$«· ¢ƒ ? ;<ÂD:­"( ~ù];<$’+ ¢ƒ¶/211tŽ ú9( @  "¢c;FŒ  -,9&” ? 
BDE(E E~ú~A "Š]  A;  E9š—A; ?@ " } EƒüF ½GIJ2KLnJON P8R SMTºcVYX²Ze\*o PgfiJ*b\*` a JbNcS"IScbKS¯Jbƒ¼JIS\*bMjcbNcJIW¤\PYfiJb~GŸIJ*o KSc€ €Wf>bAl3*vAt? ;û'­"(Q/211u²GIS|*fiK"PgfiJ*bÁ\*b|ƒÃ f€"\*¤ "f-l{\PYfiJb J<N…¼J*IS\bÅÄ9J*I| a \*PFS=l(JIW™U"™ÇƇ€f>bAl›\²ZeSc{I \*` ZeScPgÈ¡J*I¥‡ˆW,( Š!ˆ ", ,   ? 5 Šn,  Š Œn }‹("< },O «·(2 ? ;ù]­®"(É/11![¼J*IS\b›GÊ‘˱\llf>b!l½‘˜™_€WP=S"¤ a J*b€Wfi|S"IWf>b!l‰ÆDbA¥b˜J*È:bHÄ9J*I|*€ú~<,¯BŸ  «·(2 Ž ‹* c2šüF <, },5, 6 ? " } c …B®2; (Š}(E•§i« Ž ü ? B¨W «·(2 ? ;OÂ'Q­èû;Fˆ]½«· }¢ƒǚÇ;ù]½’+A5[  ù ;  ’+ ¢ƒ/1(11 Ž &:,<;=6>; ?A@ "2ÌB‡EE E ú9Š“Œn } E›­®c¹ Š·’+5 Š 9͟³ Î( @ 5 ? , *, <, "âüF ³GIJ2KLJONMP8R S–jcbPFScIWb\*PgfiJ*b\*` a J*bo NcS"IS"b˜KSÇJ*b a J*¤nµ˜{PFS"IUGIJ2KSW€€Wf>b!lÉJ<NÏʟIWfiS"bPF\*` †‡\*bAl{\l(Sc€*m8j aŸa GÊ¡†'oOppcr*˜3(t(2v31(¦ ? ;<ÂD‡­®"—/1(11MZ]S"ÈǑ˜P=\PYf€WPgfiK \`$‚»J2|S"`„€NcJ*Id{o PFJ*¤ \*PgfiK…G·Ê‘ɱ\llf>b!l³ˆnc,Š+ˆ  ", *, }(  «·(2eŒn ‹" -,O( «·  ù]; ? •­ ¢ƒ /11AsA·¼¯JIS\*b»G+\IWPYo<J<No=‘µS SKR³±\"l2lf>bAl "™¶Æ®€Wf>b!l~†:f>b!l{A€f>PgfiK–¼eb˜Jȟ`}S|"l!S~\*b|²‘PF\*Pgf€PgfiK\*` jcbNcJIW¤\PYfiJb'ˆnW, Šˆ  ", *, }( _«· Œ }; ‹("< },O«·(2 ͝^ú9 Ё¶/1(1 /½B‡EE E—B®"¹!, )+ },Ux&$ #; # Š} O,  ú9ŠY_üF ³GIJ2KL·JONMP8R S…jWbPFS"IWb˜\*PgfiJ*b\*` a JbNcS"IScbKSËJ*bÐdKJ*{A€WPgfiKW´Ç‘µ SSK RÑ\b˜|¬‘˜f-lb\*` GŸIJKSc€€f>bAl m8j a d·‘‘ GoOp(TWr_˜t¦(1_vAt/2ª
õ ô ð $ o¦lP¤ % þ * ó-" * õ ð/0 õ ñ ¯l™G®l·j¦B«u£ ¨ ¡l«Ÿ¢®Q£B®l·jž ·¼²º· ™‹­^êòã21+Êî3uï 1+54 Í î6724 Í æ £B«™í™0­!²· ¨ £B²™0›'©c«ºŸ ¨ ²¯l™ô¸0Ÿ¢§l¤s²º­»· ¤'²º¯p™í¸‹Ÿ¢«!° «º™‹­!¡ Ÿ¢¤l›p· ¤l¦ ™‹s™‹¤]²º­ë·j¤¶£ ¸0ŸB«¡l§l­ë²¯Q£T²Ç·j­ë™0· ° ²º¯p™‹« ¨ £B¤Y§Q£¢ž ž¼Æ Ÿ¢«£B§p²Ÿ ¨ £T²·j¸0£¢ž ž¼Æ £¢ž ·j¦¢¤l™‹› ¾ ñ ¯l™ ®Q· ¦¢«º£ ¨ ¡l«ºŸB®6£T®Q· ž · ²ÝÆ Ÿ¢©\§l¤mùm¤lŸZµ4¤Å²§p¡lž ™0­Ðã ¤pŸ¢² ©ÄŸ¢§l¤p›:· ¤Å²º¯p™Á›p·j¸_²º· Ÿ¢¤Q£T«ÆQæ· ­ ™0­!²· ¨ £B²™0›2©c«ºŸ ¨ ²º¯p™0·¼«ò§l¤p· ¦¢«u£ ¨ ¡l«ºŸB®6£T®Q· ž · ²ÝÆG®mÆ&žj· ¤l™0£B«ò· ¤Y²º™_«¡6ŸT° žj£B²·jŸ¢¤ ¾ ñ ¯l™§p¤l· ¦¢«º£ ¨ ¡p«ºŸB®6£B®l· žj·¼²ÎÆŸK©5§p¤pùm¤pŸZµ¤ ²º§m¡Qž ™‹­· ­™0­!²· ¨ £B²™0›&£B­²¯l™Ï¡l«ŸY›l§l¸_²ŸK©½žj™‹¤l¦B²º¯ ¡l«Ÿ¢®6£T®Q· ž · ²ÎÆ´êÏã829!î:8<;]æu´p¦B«u£T¡Q¯l™ ¨ ™3­!¡r™0ž ž ·j¤p¦¡p«ºŸB®p° £B®l·jž ·¼²ÎÆÖêÏã51pæu´£B¤l›»¦B«u£T¡Q¯l™ ¨ ™¡Q¯lŸ¢¤l™ ¨ ™\¡p«ºŸ¢®Q£T° ®Q· ž · ²ÎÆdê\ã= æu´rµ4¯l™_«º™>8 9 £¢¤p›?8 ; £B«º™²º¯p™ž ™‹¤l¦B²º¯ Ÿ¢© £¦B«u£T¡Q¯l™ ¨ ™@1ò£¢¤p› £¡l¯lŸ¢¤l™ ¨ ™A¾ ê\ã21áî æCBíêÏã8/9!îD8!;Kæ êòã21læ êòã= æ ãt5]æ ˜š™ §l­™™ ¨ ¡Q·¼«º· ¸Ó£Bž ›l· ­!²«· ®l§p²· ŸK¤çžj™0£B«¤l™‹› ©c«ºŸ ¨ ²«º£¢· ¤l· ¤l¦ç›l£B²º£ç©ªŸB«3ž ™0¤p¦¢²¯É¡l«ºŸB®6£T®Q· ž · ²ÎƓêÏã829!îD8<;]æu¾ ˜G™"£T¡l¡p«ºŸÓÈp· ¨ £T²º™¦¢«º£B¡Q¯p™ ¨ ™"­!¡A™0ž ž ·j¤p¦h¡p«ºŸ¢®Q£B®l·jž ° ·¼²!Ưêòã21læ ®pÆÅè0™_«ºŸ¢¦¢«u£ ¨ ¨ Ÿs›l™‹ž¹ãħl¤p·j©ªŸ¢« ¨ ›l· ­Î° ²«· ®l§p²· ŸK¤ræÛ®A™0¸0£¢§p­™"m·¼«²§Q£Bžjž¼Æ£B¤YÆ ¸0Ÿ ¨ ®l·j¤l£B²·jŸ¢¤ ŸK©Û¸b¯Q£T«u£¢¸_²™‹«º­¸‹ŸK§pž › ®A™Ï£ ž ™‹¦K·¼²· ¨ £T²º™\¬]£B¡Q£¢¤p™0­™ ¤Q£ ¨ ™! 
êÏã51pæCB EGF H I J Í êòãKD1 I æ Á øLÞï MN9 ï E F ã29sæ µ¯p™‹«™OKD1 I · ­¿²¯l™¶· ¤l›l·¼Y·j›p§Q£Bž<¸b¯Q£B«º£¢¸_²º™_«¿·j¤ ¦¢«º£B¡l¯l™ ¨ ™­™‹õY§p™0¤p¸0™B´Q£¢¤l›Éï MN9:ïs· ­½²¯l™3¸b¯Q£B«º£¢¸_²º™_« ­™_²­· è0™ŸK©¦¢«º£B¡l¯l™ ¨ ™0­‹¾ ˜š™ £T¡l¡p«ºŸÓÈp· ¨ £T²º™ ¡Q¯pŸK¤p™ ¨ ™ ­!¡r™‹ž žj· ¤l¦ ¡l«ºŸB®6£Z° ®Q· ž ·¼²ÝÆíêòã= æÏ®mÆí®Q· ¦¢«º£ ¨Ã¨ Ÿs›p™0ž3®r™‹¸Ó£B§l­™ ²¯l™_«º™ £B«™ç¸0™_«²º£¢· ¤&¡l¯lŸ¢¤l™_²º· ¸ ¸0Ÿ¢¤l­!²!«u£¢· ¤s²º­ · ¤&¡l¯lŸ¢¤l™ ¨ ™ ­™‹õY§l™‹¤l¸‹™0­‹¾ êÏã<AæCB êòãKP Í ï4®AŸK­ À æ &  JQ  JSR êòãKPÎï KT754 Í æ êòãT™‹ŸK­ À ï KP Q æ ãzµKæ µ¯p™‹«™UKP  · ­š²º¯p™<· ¤p›l·¼m· ›l§l£¢žç¸b¯l£B«u£B¸‹²™_«Ð· ¤¿£ ¡Q¯pŸK¤p™ ¨ ™3­™‹õY§l™‹¤l¸‹™¢¾ ØGÙ5V WYX[Z ä¯åUæâã7ç ö6ŸB«œ²º¯p™ ÊËÌ ¨ Ÿs›p™0ž ´ µ½™ £B­­§ ¨ ™¥²¯Q£T² ¦¢«º£B¡l¯l™ ¨ ™0­£¢¤l›G¡l¯lŸK¤p™ ¨ ™0­h£T«º™ç· ¤l›l™_¡r™‹¤l›l™‹¤Y²ºž¼Æ «º™‹¸0Ÿ¢¦K¤p·j苙‹› ´s£¢¤l›²º¯l£B²E™Ó£B¸b¯h¸b¯l£B«u£B¸‹²™_«5· ­5£¢ž ­Ÿ· ¤m° ›l™_¡A™0¤p›l™‹¤s²ºž¼Æ«º™‹¸0Ÿ¢¦K¤p·j苙0› µ·¼²¯l· ¤²¯l™¦B«u£B¡l¯l™ ¨ ™0­ £¢¤p› ¡l¯lŸ¢¤l™ ¨ ™0­‹¾ êÏã2é?ëÌîÎê°ëqï é8îÎêæ Á êÏãté°ë ï é æ êòãÌê°ë ï êæ Á &  J EGF  J Í êòãK1 ë  ï KD1\±æ & IDJ EG] IDJ Í êÏãKP ë I ï KT I æ ã2=sæ ÜΛp™Ó£Bž ž ÆK´¿²¯l™ ¡l«Ÿ¢®Q£B®Q· ž ·¼²ÝÆÃ²¯Q£T²8£¢¤ · ¤p¡l§p² ¸b¯Q£T«u£B¸‹²™‹«^K  µ· ž ž\®A™ô«º™‹¸0Ÿ¢¦K¤p·j苙‹›,£B­Ð£B¤ ŸK§p²Î° ¡Q§m²ô¸b¯l£B«º£¢¸_²º™_«_K I ­¯pŸK§pžj›¿®r™ ™‹­!²º· ¨ £T²º™‹›/™ ¨ ° ¡Q·¼«º· ¸0£¢ž ž¼Æ]¾ & ŸZµ™‹K™_«0´Ð­· ¤l¸‹™'²º¯p™‹«™ £B«™C=+@Kî!( ¦¢«º£B¡l¯l™ ¨ ™0­ ãÝÞ¢ßBà‹ábâjæò£¢¤p›[@KîG¡l¯lŸ¢¤l™ ¨ ™0­»ãŽÞ¢ßBàrßÓæ · ¤ ²º¯p™¬s£T¡6£B¤l™‹­™ ¸b¯Q£T«u£¢¸_²™‹« ­™_²0´¬¢Ü · . ï+¶¢ï!@m´ ·¼² · ­· ¨ ¡rŸK­­·¼®lžj™²ŸÏ™‹­²· ¨ £B²™²¯l™"¡p«ºŸ¢®Q£B®l·jž ·¼²ÎÆ ™ ¨ ° ¡Q·¼«º· ¸0£¢ž ž¼Æ&›p§l™ ²Ÿ ›l£B²º£ ­!¡Q£B«º­™‹¤l™‹­­‹¾ ñ ¯l™_«º™0©ªŸB«º™¢´ µ™h£T¡l¡p«ºŸÓÈp· ¨ £T²º™·¼²®6£¢­™‹›¹ŸK¤¹²Ýµ½ŸÏ¡Q£B«u£ ¨ ™_²º™_«º­– ²¯l™£¢¸‹¸‹§p«u£B¸‹ÆhŸ¢© ²¯l™Òp«º­!²¸Ó£B¤l›p·j›l£B²™` Í £¢¤p›\²º¯p™ ¸‹§ ¨ §lžj£B²· ]™3£B¸0¸‹§p«u£B¸‹ÆçŸK©ä£¢ž ž ¸Ó£B¤l›p·j›l£B²™0­a Q ¾ ô ð/b6c þ b ' õed f g "    b c ĤGr6 <oCh4xz¤2rU£¦™Œn  %™urto ]ji . ] k *.  o lä2oC  b c Ĥ♌˜—lPn4øKr6 <o'£¦™Œn  %™urto¦¤  . 
] i m nom .\* lŒr6 <oyxpqä2o ð/r õ µ¯l™_«º™>s»· ­²º¯p™¤Y§ ¨ ®6™‹« Ÿ¢©¸0£¢¤p›l· ›Q£T²º™‹­ ©ªŸB« ²¯p™ ¸b¯Q£T«u£B¸‹²™‹«‹´6£B¤l›Éï Mhï]· ­4²¯p™¸_¯Q£T«u£B¸‹²™‹«½­™‹²­·j苙¢¾ ÜΤí²¯l· ­ Ê3ËÌ ¨ ŸY›l™‹ž ´«™0¦K£B«º›pžj™‹­­ Ÿ¢©3²º¯p™É·j¤Y° ¡Q§m²4£¢¤p› ŸK§m²!¡Q§p²½¸_¯Q£T«u£B¸‹²™‹«Û¡6£B·¼«º­‹´p²º¯p™Òp«º­!²½¸Ó£B¤m° ›l· ›l£B²™Ï· ­£Bž µ½£ÓÆm­£B­­·j¦¢¤l™‹›»²º¯p™h¡p«ºŸ¢®Q£B®l·jž ·¼²ÎÆY Í ¾ ö6ŸB«*¸Ó£B¤l›p·j›l£B²™0­5ŸB²º¯p™‹«5²¯Q£B¤h²¯l™½Òl«º­!²5¸Ó£B¤l›p·j›l£B²™¢´ ²¯l™ «™ ¨ £B· ¤l· ¤l¦ô¸‹§ ¨ §lžj£T²º·¼K™»£B¸‹¸0§p«º£¢¸_Æt Q?u  Í · ­h›l· ­!²«· ®l§p²™0›&§p¤l· ©ÄŸB« ¨ ž¼Æ]¾Gö6ŸB«h¸b¯l£B«u£B¸‹²™‹«­¤lŸB² £ ¨ ŸK¤p¦ ²º¯p™h¸0£¢¤p›l· ›Q£T²™0­‹´²¯l™«º™ ¨ £¢· ¤p·j¤p¦ ¡l«Ÿ¢®Q£T° ®Q· ž ·¼²ÝÆ ¨ £B­­3ø u  Q · ­4›l· ­!²«· ®l§p²™0› §l¤p·j©ªŸ¢« ¨ ž¼Æ]¾ v+w ü wqxzy $ !lP¤ %|{ 0 ‹ w ü w ð $ !lP¤ % õ x}~ 2€‚„ƒ … ö‡†‰ˆ €[Š F‰‹ € ~ 5€‚Œƒ: ö‡†Œˆ €>Š ]‰‹ € Ž 2€‚D‘’ ð! '/.  ó#" '#.  õ3“ v•”– ü ”— ‹ € ˜ 5€‚š™… ö ƒj…q› ˆ €œŠ F‰‹ €  2€‚‰™ ½ö ƒ  › ˆ €[Š ] ‹ € r ð! ' ó-" ' õ ö ð/b6 ž – ”Ÿ– ó b " ž — ” — õ   ¡- ð! ' ó-" ' õq¢ “ v ž – ü ž — ˆ ’£ ¤ v ž – ü ž — xUv ž – ü ž —¦¥ y–ð< ' ó-" ' õ { † ‹ ž – ü ž — ð! ' ó#" ' õ x † £ ‹ ¡! 0 ¡! ð ‹ ” – ü ” — ð! '/.  ó#" '#.  õsô ð! ' ó#" ' þ '#.  ó/" '/.  õ % ‹ ž – ü ž — ð! ' ó#" ' õtõ7ˆ ’£ ~ ‹ ž – ü ž — ð! ' ó#" ' õ x ‹ ” – ü ” — ð! '/.  ó#" '#.  õwô ð< ' ó#" ' þ '#.  ó#" '/.  õ Ž £ ‹ ¡! 
0 £ ‹ 0 £ ‹ ~ £ ‹ Ž £ ‹ ˜ £ ‹ ö÷·j¦¢§p«™ ¶Å Æ «u£T¡Q¯p™ ¨ ™b°v¡l¯lŸ¢¤l™ ¨ ™³£Bž ·j¦¢¤ ¨ ™‹¤s² £¢ž ¦KŸB«º·¼²¯ ¨ ãIJε۟B°v›p· ¨ ™‹¤l­· Ÿ¢¤Q£Bž ¨ ŸB«¡Q¯pŸKž Ÿ¢¦K· ¸Ó£Bž £¢¤l£¢ž¼Æm­· ­£Bžj¦¢Ÿ¢«· ²¯ ¨ æ § E –K>GFIHKJ-LMJ:NƒO´H*Ø × J-LMJ “SR DåH × LMJ × J “^R HIØ–]DÄJ+HIL ¨ÓÙ2Ú W©%æ|ª2ÝÓÜ©«­¬›ã ®°¯±qܳ²°Ü7ªŸ©Ïå3´¶µ°¯©‚ª2Ýâß3·¸ ö÷· «­!²0´<µ™/›p™0­¸_«º·¼®r™À£8¬]£B¡Q£¢¤p™0­™,¦¢«º£B¡l¯l™ ¨ ™_° ¡Q¯pŸK¤p™ ¨ ™Ç£¢ž · ¦K¤ ¨ ™‹¤m²Ð£Bžj¦¢Ÿ¢«· ²¯ ¨ ©ÄŸB«ÁŸB«º›p·j¤l£B«!Æ ²º™bÈm²‹´µ¯l™_«º™ç·¼²º­ ·j¤m¡Q§m²h· ­£ ¡Q£¢·¼«ŸK©4¦B«u£B¡l¯l™ ¨ ™‹­ £¢¤p›¡Q¯pŸK¤p™ ¨ ™‹­‹¾/" ž¼²¯lŸ¢§l¦¢¯²º¯p™ £¢ž ¦¢Ÿ¢«º·¼²¯ ¨ ›lŸs™‹­ ¤lŸB²2·j›p™0¤s²· ©ªÆ8µŸ¢«›Ñ®rŸK§p¤l›l£B«º· ™‹­2Ÿ¢«#¡Q£B«!²º­ÅŸK© ­!¡A™0™‹¸b¯ ´¹µ™#¸Ó£Bž ž ²º¯p·j­<£¢ž · ¦K¤ ¨ ™0¤Y²í²º£¢­!ù Zέ!Æm¤m° ¸_¯p«ºŸ¢¤lŸ¢§l­ ¨ Ÿ¢«!¡Q¯lŸ¢ž ŸK¦¢·j¸0£¢žÂ£B¤Q£Bž ÆY­· ­Î\Ô®A™‹¸Ó£B§l­™ ¦¢«º£B¡Q¯p™ ¨ ™_°±¡Q¯pŸK¤p™ ¨ ™G²§p¡Qž ™‹­•· ¤Â¬]£T¡6£B¤l™‹­™G¡ ™_«!° ­ŸK¤l£¢ž¤Q£ ¨ ™‹­3¸Ó£B¤•®r™²º¯pŸK§p¦K¯Y²ŸK©Û£¢­3£ ¨ · ¤l· ¨ £Bž ¸0Ÿ ¨ ¡AŸK­·¼²·jŸ¢¤Q£Bž*§l¤p·¼²²º¯l£B²"¯Q£B­£¸‹™‹«!²u£B·j¤ ¨ ™Ó£B¤m° · ¤l¦p´µ¯p·j¸b¯¹· ­²º¯p™²™‹¸_¯p¤l· ¸0£¢žä›p™‹Òl¤l·¼²º· Ÿ¢¤ ŸK© ¨ ŸB«!° ¡Q¯p™ ¨ ™‹­‹¾¯0 ŸB«º™‹ŸZ]™‹«0´²¯l™É£Bž ¦KŸB«º·¼²º¯ ¨ · ­ò£ ²ÎµŸT° ›l· ¨ ™‹¤l­· ŸK¤l£¢žr™bÈm²™0¤p­· ŸK¤çŸK©I£ ¬]£T¡6£B¤l™0­™ ¨ ŸB«¡l¯lŸB° ž ŸK¦K· ¸0£¢ž£¢¤l£¢ž¼ÆY­·j­£Bžj¦¢Ÿ¢«º·¼²¯ ¨ ãÌ>£B¦]£T²u£Y´ø<(+(Y9Y溾 è™_²÷· ¤p¡l§p²÷¦¢«º£B¡l¯l™ ¨ ™0­÷£¢¤p›¡l¯lŸ¢¤l™ ¨ ™0­®r™—é Á KD1 Ͱ¹¹º¹ KD1 E F £B¤l›dê Á KP Ͱ¹º¹º¹ KP E ] ´pµ¯l™_«º™>KD1ò£¢¤p› KP £T«º™· ¤p›l·¼m· ›l§l£¢ž4¦¢«º£B¡l¯l™ ¨ ™0­ò£¢¤p›Á¡Q¯pŸK¤p™ ¨ ™‹­‹¾ ÜΤ/Ÿ¢«º›p™‹«Ð²ºŸÂÒQ¤l›/£Â­™‹õY§p™0¤p¸0™ ŸK© ¦¢«º£B¡l¯l™ ¨ ™_° ¡Q¯pŸK¤p™ ¨ ™É²§p¡Qž ™‹­(1 Í î Í î ¹º¹¹ îj1 Q î Q ²º¯l£B² ¨ £ZÈp· ° ¨ · è0™‹­ÂêÏãtéŠîuê æ½›l™‹­¸‹«· ®r™‹›É· ¤ é*õs§Q£T²º· Ÿ¢¤&ãz¶Kæu´rµ™ §l­™É²ÝµŸB°v›l· ¨ ™‹¤l­· ŸK¤l£¢ž›mÆm¤l£ ¨ · ¸É¡l«ŸK¦B«u£ ¨h¨ ·j¤p¦p´ £¢­4­¯pŸBµ4¤ · ¤ ö÷· ¦K§m«º™;¶Y¾ ÜΤÀö÷· ¦K§m«º™³¶s´¼»3½‚¾ ¿#· ­í£:²u£T®Qž ™ ²¯Q£T²í¯pŸKž ›l­ ¦¢«º£B¡Q¯p™ ¨ ™_°±¡Q¯pŸK¤p™ ¨ ™"²§p¡lž ™0­™0¤p›l· ¤l¦\£B²¡rŸ¢­·¼²º· ŸK¤ ã5Àâî:Áp溾? 
½¾ ¿ ã21  î  æ¯pŸKž ›l­²¯l™ ¨ £TÈm· ¨ § ¨ ¡p«ºŸ¢®m° £B®l· žj·¼²ÝÆ\ŸK©¦B«u£T¡Q¯p™ ¨ ™b°±¡Q¯lŸ¢¤l™ ¨ ™²§p¡Qž ™­™0õs§l™‹¤l¸‹™0­ ­!²º£B«²· ¤l¦©c«ºŸ ¨ ã ï:îºï]棢¤p›™‹¤l›p· ¤l¦ £T²4ãŸÀÓî:Ápæ µ¯lŸ¢­™ ÒQ¤l£¢žA²§p¡lžj™·j­ã21  î  æu¾ ñ ¯l™,£Bžj¦¢Ÿ¢«· ²¯ ¨ ­!²u£T«²­Â©c«ºŸ ¨ ãåï7îºïsæ µ¯p·j¸b¯ ¸‹Ÿ¢««™0­!¡AŸK¤p›l­²Ÿô²º¯p™»®A™0¦¢·j¤p¤l· ¤l¦íŸ¢©¦B«u£B¡l¯l™ ¨ ™‹­ £¢¤p› ¡Q¯pŸK¤p™ ¨ ™‹­‹´£B¤l› ¡p«ºŸY¸0™‹™0›p­ ²ŸZµ4£T«º›²¯l™™‹¤l› ŸK© ¦¢«º£B¡l¯l™ ¨ ™0­\£B¤l›Á¡l¯lŸ¢¤l™ ¨ ™0­ÉãŸ8 9 îD8 ; æu´¸b¯l£B«u£B¸_° ²™‹«&®pÆ ¸_¯Q£T«u£B¸‹²™‹«‹´ ©ªŸB«Ð®AŸ¢²¯¿¦¢«u£T¡Q¯p™ ¨ ™‹­ô£B¤l› ¡Q¯pŸK¤p™ ¨ ™‹­‹¾ "²5™‹s™‹«!Æ¡rŸ¢·j¤Y²ã5ÀÓîDÁlæ· ¤\²¯l™«º™‹¦K· Ÿ¢¤ ï[_ÀÄÃ_8 9 îuïœÅÁÃ_8 ; ´¢²º¯p·j­5£Bž ¦KŸB«º·¼²º¯ ¨ §p¡r›l£B²™‹­ ²¯l™ ¨ £TÈm· ¨ § ¨ ¡p«ºŸ¢®Q£B®l·jž ·¼²ÝÆ"©ÄŸB«ä²º¯p™½­§m®Q­™‹õY§p™0¤p¸0™ ŸK©ä¦¢«º£B¡l¯l™ ¨ ™_°±¡Q¯pŸK¤p™ ¨ ™3²º§m¡Qž ™‹­¶Â ½‚¾ ¿ ã51  î  æ3㠞 · ¤l™ ø%¶ £¢¤p›Ðø<5 · ¤»ö÷·j¦¢§p«™Q¶Kæu¾ ñ ¯Y§l­‹´£B²çãŸ8 9 î:8 ; 溴µ™ ¸0£¢¤<Ÿ¢®p²u£B·j¤Ç£G­™0õs§l™‹¤l¸‹™•Ÿ¢©²º§m¡Qž ™0­ò²º¯l£B² ¨ £ZÈp· ° ¨ · è0™‹­•êòã2é8îÎê溾 Æ Ç È É Ê Ë Ì Í Î Ï Ð Ñ Ò Ó Ô Ñ Ò Ó Ñ Õ Ò Ó Ô Ö × Ò Ø × Ò Ù Ø × Ò Ú Ù Ø Û × Ò Ü Ú Ù Ø Ý Þ ß à Þ ß à á â ã ä á å æ ç è é å æ ç è é å æ ç è é å æ ç è é å æ ç è é ê ë ì í î ï Þ ð à ñ ï Þ ð ß ñ ï Þ ð Þ ñ ï Ý ð Ý ñ ï à ð á ñ ï à ð â ñ ï á ð ã ñ ï à ð â ñ ï à ð ã ñ ï á ð ä ñ ê ò ì ó î ê ò ì ë î ê ò ì ò î ê ô ì ô î ï ß ð á ñ ï ß ð á ñ ï ß ð á ñ ï ß ð á ñ ï ß ð á ñ ï ß ð á ñ ï ß ð á ñ ö÷· ¦K§p«™ 57 " ­¤l£B¡Q­¯pŸ¢²:ŸK©ô²¯l™¿¦¢«º£B¡l¯l™ ¨ ™_° ¡Q¯pŸK¤p™ ¨ ™£¢ž · ¦K¤ ¨ ™‹¤m²½©ªŸ¢«ŸB«º›p·j¤l£B«!Æç²º™bÈm² ö÷·j¦¢§p«™É5ô·j­É£Á­¤l£B¡l­¯lŸB²ÉŸK© ²º¯p™š¦¢«º£B¡Q¯p™ ¨ ™_° ¡Q¯pŸK¤p™ ¨ ™ £Bž ·j¦¢¤ ¨ ™‹¤s²0´ µ¯p™‹«º™ ²¯l™ ·j¤m¡Q§m² ¦¢«º£B¡l¯l™ ¨ ™0­ò£¢¤p›ô¡Q¯pŸK¤p™ ¨ ™‹­ç£T«º™¢] _öõø÷ £¢¤l› `ía^c e³ùûú­ü ´ £B¤l›À²¯l™#¸‹§p«!«º™0¤s²Á¡ Ÿ¢· ¤Y²ô· ­ ãt¶7îÎ9Y溾 ñ ¯l™_«º™#£T«º™Ç©ÄŸ¢§p«í¦B«u£T¡Q¯l™ ¨ ™b°v¡l¯lŸ¢¤l™ ¨ ™ ²§p¡Qž ™‹­3™0¤p›l· ¤l¦ò¯p™‹«™¢´I£¢¤p›¹²¯p«™0™²§p¡lžj™‹­3­!²º£B«!²º· ¤l¦ ¯l™_«º™B¾ " ž ž¸‹Ÿ ¨ ®Q· ¤Q£T²º· ŸK¤p­ŸK©²º¯p™0­™»²§p¡lžj™‹­ £B«™ ­™0£B«º¸b¯p™0›A´ £¢¤l›Í²º¯p™ ¨ £ZÈp· ¨ § ¨ ¡l«Ÿ¢®Q£B®l·jž ·¼²º· ™‹­ 
§p¡:²ºŸÇ²º¯p™Á™‹¤l›p·j¤p¦ ¡AŸK· ¤s²»Ÿ¢©ç™0£¢¸b¯2²º§m¡Qž ™0­š£T«º™ §p¡A›l£B²™0›A¾ ¨ÓÙ5V WYX[Z÷åùàq¯ý½àþ¯¼±qܲÂÜ7ªŸ©´å3´¶X[ÿÓÜ©%Ü\¯‘ã© ä¯Üo¯©‚ª+ã·¸ > ™bÈm²‹´Bµ½™5›l™‹­¸‹«º·¼®r™£¦B«u£B¡l¯l™ ¨ ™_°±¡l¯lŸK¤p™ ¨ ™Û£¢ž · ¦K¤Y° ¨ ™‹¤s²£Bžj¦¢Ÿ¢«· ²¯ ¨ ©ªŸB«ÊˍÌŸK§m²¡l§p²‹¾5˜»™3£¢­­§ ¨ ™ ²¯l™_«º™£T«º™3¤lŸ\­™‹¦ ¨ ™‹¤s²u£T²º· Ÿ¢¤ ™_««ºŸB«º­· ¤ ²¯l™ Ê3ËÌ ŸK§m²!¡Q§p²‹´Iµ4¯l· ¸_¯•· ¤•¡p«u£B¸‹²·j¸0£Bž²™‹« ¨ ­ ¨ ™Ó£B¤l­3²¯Q£T² ²¯l™½©ÄŸB« ¨ ¯Q£B­5£3¦B«º· ›h©ªŸ¢«5™Ó£B¸b¯¸_¯Q£T«u£B¸‹²™‹«‹¾äÜΤ²º¯p· ­ ¸Ó£B­™¢´Iµ½™¸0£¢ž žE²¯l™\ÊËÌ Ÿ¢§p²!¡Q§p²"¸b¯Q£T«u£B¸‹²™‹« ¨ £T° ²«·¼Èr´m· ¤Ïµ¯l· ¸b¯Ï™Ó£B¸b¯Ï¸b¯Q£T«u£B¸‹²™‹«Û¯l£¢­Û£žj· ­!²ÛŸ¢©­™‹]° ™‹«º£¢ž¸0£¢¤p›l· ›Q£T²™0­ÏŸ¢«º›p™‹«™0›ô®mÆ&²¯l™‹· «ò¸‹™‹«!²u£B· ¤m²· ™0­‹¾ ÜΤ ©Ä£B¸‹²‹´6·¼² · ­¤lŸB²›l·¼ì\¸‹§lž¼²²Ÿ\™bÈm²™0¤p›²º¯p™£Bžj· ¦K¤Y° ¨ ™0¤Y²½£Bžj¦¢Ÿ¢«· ²¯ ¨ ²Ÿ¯Q£¢¤p›lž ™£¸b¯Q£T«u£B¸‹²™‹«½žj£B²!²º· ¸‹™¢´ µ¯l· ¸_¯š· ­£ ›l£B²º£ ­!²«º§p¸‹²§p«™Ï²º¯l£B²¸‹ŸK¤p­·j›p™‹«­ ²¯l™ ¡AŸK­­·¼®Q· ž · ²ÝÆ Ÿ¢©4­™‹¦ ¨ ™‹¤s²u£T²º· ŸK¤š™_««Ÿ¢«º­‹¾^&ŸTµ™_K™‹«‹´ µ4™Éž · ¨ · ²™‹›Ç²¯l™•·j¤m¡Q§m² ²ŸÁ£Ð¸b¯Q£T«u£B¸‹²™‹« ¨ £B²!«º· È ®A™0¸0£¢§p­™µ4™4›lŸK¤Aó ²Ûùm¤pŸBµí¯pŸBµô²Ÿ ¨ £Tùs™£¢¤ ÊËÌ ¨ ŸY›l™‹žl²¯Q£T²*²º£Bù]™‹­*­™‹¦ ¨ ™‹¤Y²u£T²º· Ÿ¢¤h™_««ºŸB«º­Û· ¤s²Ÿ£¢¸b° ¸0Ÿ¢§l¤Y²0¾ ñ ¯l™ç£Bžj· ¦¢¤ ¨ ™‹¤s²£¢ž ¦KŸB«º·¼²¯ ¨ ©ÄŸB«hÊˍÌ,Ÿ¢§p²!¡Q§m² · ­Ï®Q£¢­· ¸Ó£Bžjž¼ÆÐ²¯l™ ­º£ ¨ ™É£B­ò­¯pŸZµ¤ô· ¤íö÷·j¦¢§p«™Ö¶Y¾ & ŸTµÛ™‹]™_«0´½­·j¤p¸0™ ²º¯p™‹«º™ £B«º™ ­Ÿ ¨ ™_²º· ¨ ™‹­ò¤pŸš¸0ŸB«!° «º™‹¸‹² ¸_¯l£B«u£B¸‹²™_«º­£ ¨ Ÿ¢¤l¦\²¯l™¸0£¢¤p›l· ›Q£T²™0­‹´rµ™"· ¤m° ²«ŸY›l§l¸‹™ £B¤ë£B¡p¡l«ºŸÓÈm· ¨ £B²™ ¨ £T²¸‹¯&®A™‹²Ýµ™0™‹¤Ð²¯l™ ¦¢«º£B¡Q¯p™ ¨ ™_°±¡Q¯pŸK¤p™ ¨ ™¹²º§m¡Qž ™‹­ · ¤Ç²¯l™ ›l· ¸_²º· ŸK¤l£B«!Æ £¢¤p› ²¯lŸK­™h· ¤ ²º¯p™Ï¸b¯Q£T«u£B¸‹²™‹« ¨ £B²!«º· ÈíãÌ& ™‹«™¢´äµ™ ›l™_ÒQ¤p™h£ò­§p®l­!²!«º· ¤l¦çŸK©*£Ï¸_¯Q£T«u£¢¸_²º™_« ¨ £B²!«º· ȹ£¢­£ ­§p®l­!²«·j¤p¦²º¯l£B²ä· ­I©ªŸ¢« ¨ ™0›"®YÆ3­™‹ž ™0¸_²º· ¤l¦Ÿ¢¤l™*¸b¯l£B«Î° £¢¸_²º™_«©c«ºŸ ¨ ™Ó£B¸b¯ ¸0£¢¤p›l· ›Q£T²º™žj· ­!²bæu¾ "4²2™0£¢¸b¯ ¡rŸ¢·j¤Y²,ã5Àâî:Álæu´ÁÒl«­!²0´ôµ4™ «º™_²!«º· ™‹]™ ¦¢«º£B¡Q¯p™ ¨ ™_°±¡Q¯pŸK¤p™ ¨ ™í²§p¡lžj™‹­š§l­· ¤l¦Å¦¢«º£B¡l¯l™ ¨ ™0­ £¢­½ùs™‹ÆY­– 
øK¾IèI·j­!²É£¢ž ž"²§p¡lžj™‹­·j¤²¯l™»›l· ¸‹²· ŸK¤Q£T«ÆÇµ4¯lŸK­™ ¦B«u£T¡Q¯p™ ¨ ™‹­ £T«º™É£š­§p®l­!²«·j¤p¦š· ¤ë²¯p™¹¸b¯l£B«Î° £B¸‹²™_« ¨ £B²!«º· ÈŸK©÷¦¢«u£T¡Q¯p™ ¨ ™‹­ ­!²º£T«²º· ¤p¦Ï©ª«Ÿ ¨ ÀI¾ ¶Y¾ÛˍŸ ¨ ¡Q§m²º™ò²º¯p™ ¨ · ¤l· ¨ § ¨ ™‹›l·¼²›l· ­!²u£B¤l¸‹™çŸK© ²¯l™‹·¼«í¡Q¯lŸ¢¤l™ ¨ ™‹­ë£B¤l›?­§m®Q­!²!«º· ¤l¦¢­ë· ¤/²¯l™ ¸b¯l£B«º£¢¸_²º™_« ¨ £B²!«º· È:Ÿ¢©ç¡l¯lŸK¤p™ ¨ ™0­»­!²u£T«²· ¤l¦ ©c«ºŸ ¨ ÁQ¾ 5m¾5ö÷·jž¼²™‹« ²º¯pŸK­™h²º§m¡Qž ™0­®mƕ™0›p· ² ›p· ­!²u£B¤l¸‹™ç£¢¤l› ©c«º™‹õY§p™0¤p¸‹ÆK¾ " ­£²º¯m«º™0­¯pŸKž ›ÉŸK©÷™0›p· ² ›p· ­!²u£B¤l¸‹™¢´rµ™Òlž ²™_«º™0› ŸK§m²\²§p¡lžj™‹­hµ¯lŸ¢­™™‹›l·¼²Ï›p·j­!²º£¢¤p¸0™ ŸK©¡Q¯pŸK¤p™ ¨ ™‹­ · ­ ¨ ŸB«º™ ²¯Q£B¤ôŸB«ò™0õs§Q£¢ž½²Ÿö8 ; L!¶Y´½™bÈp¸‹™‹¡p²Ïµ¯p™0¤ 8 ; Á ø¢¾ >Ÿ¢²™Ï· ©A8 ; Á øK´ä™0›p· ² ›p· ­!²u£B¤l¸‹™Ï¸Ó£B¤l¤pŸ¢² ®A™4§p­™0›©ÄŸB«EÒQž¼²™‹«·j¤p¦3®r™‹¸Ó£B§l­™4· ²E· ­E™0·¼²¯l™‹«5ï3ŸB«4ø¢¾ ñ ¯m§l­‹´¢µ½™­ŸB«²™0›h²¯l™²§p¡Qž ™‹­®pÆ©c«º™0õs§l™‹¤l¸‹· ™0­5£B¤l› ­™0ž ™‹¸‹²™0›Ð²º¯p™ ²ºŸB¡íµ¹²§m¡Qž ™0­‹¾ ñ ¯l™‹­™ ²º¯m«º™‹­¯lŸ¢žj›p­ µ4™_«º™›l™_²º™_« ¨ · ¤p™0›ç²º¯m«ºŸK§p¦K¯ ™_ÈY¡ ™_«º· ¨ ™‹¤s²º­‹¾ ˜š™*²º¯p™0¤«™‹²!«º· ™‹]™Û¦B«u£B¡l¯l™ ¨ ™_°±¡l¯lŸK¤p™ ¨ ™Û²§m¡Qž ™0­ §l­· ¤l¦¡Q¯pŸK¤p™ ¨ ™‹­£¢­½ù]™_Æm­– øK¾IèI·j­!²É£¢ž ž"²§p¡lžj™‹­·j¤²¯l™»›l· ¸‹²· ŸK¤Q£T«ÆÇµ4¯lŸK­™ ¡l¯lŸ¢¤l™ ¨ ™0­£B«™h£Ï­§m®Q­!²!«º· ¤l¦ç·j¤ ²¯l™ ¸_¯Q£T«u£B¸_° ²™_« ¨ £T²«·¼ÈòŸK© ¦¢«u£T¡Q¯p™ ¨ ™‹­½­!²º£B«²· ¤l¦©c«ºŸ ¨ ÁQ¾ ¶Y¾ÛˍŸ ¨ ¡Q§m²º™ò²¯l™ ¨ · ¤l· ¨ § ¨ ™‹›l·¼²›l· ­!²º£¢¤p¸0™çŸK© ²º¯p™0·¼«»¦B«u£B¡l¯l™ ¨ ™‹­G£B¤l› ­§m®Q­!²!«º· ¤l¦¢­ · ¤:²º¯p™ ¸b¯Q£B«º£¢¸_²º™_« ¨ £B²!«º· ÈëŸK©"¦¢«º£B¡l¯l™ ¨ ™0­ ­!²º£T«²º· ¤p¦ ©ª«Ÿ ¨ ÀI¾ 5m¾5ö÷· ž ²™‹« ²¯lŸ¢­™\²º§m¡Qž ™0­®YÆ ™0›p· ²›l· ­!²u£B¤l¸‹™ç£¢¤p› ©ª«™0õs§l™‹¤l¸_Æ]¾ ˜š™ £Bžj­Ÿò­™_² ²¯l™²º¯m«º™‹­¯lŸ¢žj›p­Ÿ¢©E™0›p· ²›p· ­!²u£B¤l¸‹™ £¢¤p› ©c«º™‹õY§p™0¤p¸‹ÆçŸK©¦¢«u£T¡Q¯p™ ¨ ™0­£B­„8 9 L+¶£¢¤l› µY¾ ö÷·j¤l£¢ž ž¼ÆK´B©ÄŸB«÷§l¤pùY¤lŸZµ¤"²§p¡lž ™0­‹´Tµ½™5ž ·j­!²£Bž žs¸‹Ÿ ¨ ° ®Q· ¤l£B²·jŸ¢¤l­IŸ¢©m²º¯p™E¡p«º™‹ÒmÈp™‹­ŸK©Y²¯p™EÒl«­!²ä¸Ó£B¤l›p·j›l£B²™0­ ŸK© ¦¢«º£B¡l¯l™ ¨ ™0­ ­!²º£B«²· ¤l¦ô©c«ºŸ ¨ À£B¤l›²¯lŸK­™ ŸK© ¡Q¯pŸK¤p™ ¨ ™‹­­!²u£T«²·j¤p¦\©c«ºŸ ¨ ÁQ´A· 
©ä²¯p™‹Æ£T«º™¤lŸB² £Bž ° «º™0£¢›mÆ ž · ­!²º™‹› ®mƲ¯l™½£T® ŸZ]™£T¡l¡l«ŸÓÈp· ¨ £B²™ ¨ £T²º¸_¯ ¾                                        !  " # $ % & " # $ % & " # $ % & " # $ % & ' ( ) * + , . /  0 1  2 3 4 5   " # $ % &    " # $ % &   6      7   8   9 " # $ % &   :   9 ; " # $ % & <  =  > <  =  > <  =  > <  =  > <  =  > <  =  > <  =  > <  =  > <  =  > <  =  > <  =  > <  =  > <  = > <  = > <  = ! > ' ? ) @ + ' ? ) A + ' * ) B + ' C ) ? + ' C ) ( + ' C ) C + ' D ) D + <  =  > <  =  > <  =  > <  =  > <  =  > <  =  > <  =  > <  =  > <  =  > <  =  > <  =  > <  =  > <  =  > ö÷· ¦K§p«™m9 8"/­¤Q£T¡Q­¯lŸB² Ÿ¢©4¦¢«º£B¡Q¯p™ ¨ ™_°±¡Q¯pŸK¤p™ ¨ ™ £¢ž · ¦K¤ ¨ ™‹¤Y²4©ªŸ¢« Ê3ˍÌŸK§m²¡l§p² ö÷·j¦¢§p«™ 9 · ­ £ ­¤l£B¡l­¯lŸ¢²ÑŸK©'¦¢«º£B¡l¯l™ ¨ ™_° ¡Q¯pŸK¤p™ ¨ ™ £¢ž · ¦K¤ ¨ ™‹¤m² ©ªŸ¢« Ê3ËÌ Ÿ¢§p²!¡Q§m²0¾ ö6ŸB«:™0£¢¸_¯ ¸b¯Q£B«º£¢¸_²º™_«Â¡AŸ¢­· ²· ŸK¤ÍŸK©ô²º¯p™/· ¤m¡Q§p² ¦¢«º£B¡l¯l™ ¨ ™0­] _³õÅ÷¥£B¤l›»· ¤p¡l§p²¡Q¯lŸ¢¤l™ ¨ ™‹­ ` aScCeù úÅü ´ë²Ýµ½Ÿê«º™‹¸0Ÿ¢¦K¤p·¼²º· ŸK¤ ¸0£¢¤l›p· ›Q£T²º™‹­ £B«™¡p«º™‹­™0¤Y²º™‹› ¾Ï> ŸB²º™²¯l™‹«™Ï£T«º™²¯p«™0™²ÝÆY¡r™‹­ŸK© ¦¢«º£B¡l¯l™ ¨ ™_°±¡Q¯pŸK¤p™ ¨ ™ ²§p¡lžj™‹­– ™bÈl£B¸‹²ž Æ ¨ £T²º¸_¯l™‹› ´ £B¡p¡l«ŸÓÈp· ¨ £B²™0ž¼Æ ¨ £B²¸b¯l™‹› ´½£B¤l›ô§p¤pùY¤lŸZµ¤A¾ëö6ŸB« ™bÈl£ ¨ ¡Qž ™¢´m©c«ºŸ ¨ ãt¶ÅîŒ9s沺Ÿ ã25:]溴Y²¯l™²§p¡Qž ™„õ)f ù · ­h¦K™‹¤l™_«u£B²™‹›&®A™‹¸Ó£B§l­™ò®AŸ¢²¯Ð¦B«u£T¡Q¯p™ ¨ ™ õ £¢¤p› ¡Q¯pŸK¤p™ ¨ ™ ù £T«º™3·j¤ò²¯l™ ¨ £T²«º· Èr¾5ö6«Ÿ ¨ ãz¶Åîu9sæE²ºŸ ã25:îu=s溴p²º¯p™²§p¡lž ™FEif ù^ú · ­½¦K™‹¤l™_«u£T²º™‹› ®r™‹¸Ó£B§l­™ ¡Q¯pŸK¤p™ ¨ ™ ù ú ·j­ · ¤+²¯l™ ¨ £T²«º· Èr´Ð£¢¤p›8²¯l™ ²§p¡Qž ™ò· ­¯p·j¦¢¯lž¼Æ ©c«º™‹õY§l™‹¤Y²0¾¢" žj­ŸÉ©c«ºŸ ¨ ãz¶Åîu9sæ3²ºŸ ã25:îu=s溴m£B¤\§l¤mùm¤pŸZµ¤²§p¡lžj™šõ)fHGJI8·j­5¦¢™0¤l™_«u£T²º™‹› ®A™0¸0£¢§p­™‡õ £B¤l›KGFI £B«™ ²¯l™ ¡l«º™_ÒpÈm™0­\ŸK©²¯l™ Òl«­!²¸0£¢¤p›l· ›Q£T²º™‹­ŸK©¦¢«º£B¡l¯l™ ¨ ™0­£B¤l› ¡l¯lŸ¢¤l™ ¨ ™0­‹¾ L MON F›JQ–]DtLMJ × J PXÙ2Ú ¬A©%Ü7ªÌÝ|ª2ÝâßSÜÞݬæÅ¬—ã·º¯RQdÜo¯‘Ü ˜G™§p­™0›ç£ ¬]£B¡Q£¢¤p™0­™ ¤Q£ ¨ ™ ž ·j­!²Ÿ¢©ø¢¾j5!08µŸ¢«›l­‹´ µ¯l· ¸_¯»µ½£¢­ Ÿ¢«º· ¦K· ¤l£¢ž ž¼Æ ¨ £¢›l™ò©ªŸB«h£B¤Á£B§p²Ÿ ¨ £B²·j¸ ²º™‹ž ™‹¡l¯lŸ¢¤l™ç›l·¼«º™‹¸‹²Ÿ¢«!Æ&Ÿ¢© 
£T®rŸK§m²´9,µY´ðï¢ïKïY´ ï¢ïKï «º™‹­Î° · ›l™0¤Y²·c£Bž"­§m®Q­¸_«º·¼®r™‹«­ÁãÌ& · ¦]£B­¯l· ›Q£Y´Ïø4(!(!9s溾 " ž ° ²º¯pŸK§p¦K¯G²º¯p™ò¦¢«º£B¡l¯l™ ¨ ™_°±¡Q¯pŸK¤p™ ¨ ™ £Bžj· ¦K¤ ¨ ™‹¤Y²Ÿ¢© ²º¯p™Á¤l£ ¨ ™Ðž ·j­!²•µ½£¢­ ¨ £B¤m§Q£Bžjž¼Æ#›lŸ¢¤l™¢´®r™‹¸Ó£B§l­™ ŸK©*²º¯p™\™‹¤lŸB« ¨ Ÿ¢§l­£ ¨ Ÿ¢§l¤Y²3ŸK©Û›Q£T²u£ · ¤ ²¯l™²º™‹ž ™_° ¡Q¯pŸK¤p™ ›l·¼«º™‹¸‹²ŸB«ÆK´G¦¢«º£B¡l¯l™ ¨ ™_°±¡Q¯pŸK¤p™ ¨ ™ £¢ž · ¦K¤Y° ¨ ™0¤Y²ÐŸK© ž ŸBµÛ™_«ô©c«º™‹õY§p™0¤p¸‹Æ ¤Q£ ¨ ™0­Á· ­ô­žj· ¦¢¯Y²ž¼Æ ¤lŸ¢·j­!Æ]¾ ñ ¯p™‹«™0©ªŸ¢«™¢´]µ½™ÒQž¼²™‹«º™‹›\Ÿ¢§p²Û¤l£ ¨ ™‹­Eµ¯l· ¸b¯ £B¡p¡A™Ó£T«º™0›?¤pŸ ¨ Ÿ¢«™#²º¯l£¢¤ÀÒpY™Ç²º· ¨ ™‹­<· ¤?²º¯p™ ¬]£B¡Q£¢¤p™0­™²º™‹ž ™‹¡l¯lŸK¤p™"›p· «™0¸_²ºŸB«ÆK¾ ñ ¯l· ­5«º™0­§pž¼²º™‹›Ï·j¤ç£"¤l£ ¨ ™žj· ­!²ÛŸ¢©Ó5KïYø<1µŸ¢«º›p­‹´ µ¯l· ¸_¯"¸‹ŸBK™_«º­ ¨ ŸB«º™*²º¯l£¢¤Š(!@,*ŸK©p²¯l™Û™‹¤s²º·¼«º™*­§m®p° ­¸‹«· ®A™_«º­‹¾½" ­ä­¯lŸTµ¤· ¤ ñ £B®lž ™øK´¢²¯l™‹«™½£B«º™øZî!=!0 ¦¢«º£B¡Q¯p™ ¨ ™_°±¡Q¯pŸK¤p™ ¨ ™3²º§m¡Qž ™²Ÿ¢ù]™‹¤l­4· ¤ ²¯l™"¤l£ ¨ ™ ž ·j­!²‹´3£¢¤p›<²¯p™‹«º™ £B«™i¶sø<1 ›l·¾±A™_«º™‹¤m²ç¦B«u£T¡Q¯l™ ¨ ™b° ¡Q¯pŸK¤p™ ¨ ™²§p¡lž ™ ²ÝÆY¡r™‹­‹¾˜G™ §l­™‹›M(Kï+*œŸ¢©²º¯p™ ¤Q£ ¨ ™*ž ·j­!²ŸK©:5KïYø<1ôµ4ŸB«º›p­©ÄŸB«²«u£B· ¤l· ¤l¦m´K£¢¤p›"¤lŸ¢¤m° ŸB¢™‹«ºžj£T¡l¡Q· ¤p¦çøÓïKï¢ïh¤Q£ ¨ ™‹­ãĵŸ¢«›l­bæ*©ÄŸB«4²™0­!²· ¤l¦m¾ PXÙ5V S‡©%Üý|ÿâãUTiãWV² ÿÓåáÝâãUTiãYXhçªwßáÝZTSã:ݰ¯ X[W,঩4Ü\«­´Êå3© W ©Y榪2ÝâÜ©«Å¬¡ã•®°¯ Ê ²¯l™_«:²º¯l£¢¤ ²º¯p™/¦B«u£T¡Q¯p™ ¨ ™b°v¡l¯lŸ¢¤l™ ¨ ™À£Bž ·j¦¢¤m° ¨ ™0¤Y² ¨ ŸY›l™‹ž"²«º£¢· ¤l™‹›Å©ª«Ÿ ¨¥¨ £B¤p§l£¢ž ž¼Æ £¢ž · ¦K¤p™0› ›Q£T²u£Y´Ïµ½™ ¨ £B›l™í£B¤ £¢ž · ¦K¤ ¨ ™0¤Y² ¨ Ÿs›l™‹ž\µ¯p·j¸b¯ · ­¥®AŸsŸB²º­!²!«u£B¡p¡A™0› ©c«ºŸ ¨ £ ¡l§p®lžj· ¸ ›pŸ ¨ £B· ¤ ¬]£B¡Q£¢¤p™0­™Ô¦B«u£B¡l¯l™ ¨ ™_°±²ŸB°±¡Q¯pŸK¤p™ ¨ ™œ›p· ¸‹²·jŸ¢¤Q£T«Æ ã21°"Â>¬¢Üq¿Ü!Ë溾Z˜»™5¸Ó£Bž ž]²¯l™5©ªŸ¢« ¨ ™‹«÷£­§m¡r™‹«!m· ­™0› ¨ ŸY›l™‹ž ´m£¢¤p›h²¯l™žj£T²²™‹«Û£B¤Ï§l¤p­§p¡r™_«Y·j­™‹› ¨ ŸY›l™‹ž ¾ " ­h­¯lŸZµ¤&· ¤ ñ £B®lžj™“¶Y´K1?"Â>¬¢Üq¿ ÜË ¯Q£B­´5m¾j¶ «º™0£¢›p·j¤p¦K­ä©ÄŸB«™0£¢¸b¯ËÛ¯l· ¤l™‹­™Û¸_¯Q£B«º£¢¸_²º™_«äŸK¤"²¯l™£Zs° ™‹«º£¢¦¢™¢¾ ñ Ÿ ¨ £Bù]™£¢¤ £¢ž ·j¦¢¤ ¨ ™0¤Y² ¨ ŸY›l™‹žå´lµ½™3¸0Ÿ¢¤m° ­· ›l™‹«Û²¯l™ ›p·j¸_²º· 
Ÿ¢¤Q£T«Æh·¼²º­™‹žj©£¢­½£¸0ŸB«¡Q§p­‹´m²º¯l£B²Û· ­‹´ µ4™h£¢­­· ¦K¤š£ §l¤p·j©ªŸB« ¨ ¡l«ºŸB®6£T®Q· ž · ²ÝÆÉ²ºŸ £Bž ž5¡rŸK­­· ° ®Qž ™É¦¢«º£B¡l¯l™ ¨ ™_°±¡Q¯pŸK¤p™ ¨ ™É²§p¡lžj™‹­‹¾ ñ ¯l™É­§ ¨ Ÿ¢© ²º¯p™ ¡p«ºŸ¢®Q£B®l·jž ·¼²º· ™‹­ŸK©§p¤pùY¤lŸTµ¤<²º§m¡Qž ™0­· ­™0­!²·¼° ¨ £B²™‹›Å®YÆÇ²¯l™š˜<·¼²!²º™‹¤m° Bۙ0ž ž ¨ ™_²º¯pŸm› ãĘ<·¼²²™‹¤ £¢¤p› Bۙ0ž ž ´ ø4(!(møT溴£¢¤p›Ð«™0›l· ­!²!«º·¼®Q§m²º™‹›Ð®Q£¢­™‹›ôŸK¤ ²º¯p™§p¤pùY¤lŸTµ¤ò²º§m¡Qž ™ ¨ Ÿm›p™0ž ´lé5õY§l£B²·jŸ¢¤Gã25s溾 ñ £T®Qž ™ 5í¯pŸBµ­+²¯l™Ô¦B«u£T¡Q¯l™ ¨ ™b°v¡l¯lŸ¢¤l™ ¨ ™ £¢ž · ¦K¤ ¨ ™0¤Y²É£B¸0¸‹§p«º£¢¸‹·j™‹­¹Ÿ¢©²º¯p™š­§p¡A™_«Y·j­™‹›2£¢¤p› §l¤p­§p¡A™‹«!m· ­™0› ¨ ŸY›p™0ž ¾ä܎²äµ£B­ä™bÈm¡A™‹¸‹²™0›²¯Q£T²²¯p™ ­§p¡A™_«m· ­™‹› ¨ Ÿs›l™‹ž*µ4Ÿ¢§lž ›G£¢¸_¯p·j™_K™ò£s™‹«Æ ¯p· ¦K¯ ¦¢«º£B¡Q¯p™ ¨ ™_°±¡Q¯pŸK¤p™ ¨ ™£¢ž · ¦K¤ ¨ ™0¤Y²4£¢¸‹¸0§m«u£B¸‹Æ]¾ · §m«!° ¡l«·j­· ¤l¦¢ž¼Æs´¯lŸTµÛ™‹]™‹«‹´²¯l™&§l¤p­§p¡A™_«m· ­™‹› ¨ ŸY›p™0ž ñ £B®lžj™ 57 /Æ«u£T¡Q¯p™ ¨ ™b°ÊÈ*¯pŸK¤p™ ¨ ™" ž ·j¦¢¤ ¨ ™‹¤m² " ¸b° ¸‹§p«u£B¸‹Æ «º™‹¸0£¢ž ž ¡l«™0¸‹·j­· Ÿ¢¤ © · §m¡r™‹«!m· ­™0› (!(m¾Ã=,* (!(m¾jµ!* (+(Y¾jµ \ ¤p­§p¡r™_«m· ­™‹› (!@m¾Ã=,* (!@m¾ î!* (+@Y¾ î £¢ž ­Ÿ £B¸_¯l· ™‹K™‹­"£ ]™‹«!ƹ¯p·j¦¢¯G£¢¸‹¸0§m«u£B¸‹ÆK´÷£Bž¼²º¯pŸK§p¦K¯ ·¼²· ­ ¤pŸ¢²£¢­ ¦¢ŸsŸY› £B­ ²º¯l£B² ŸK©²¯l™•­§p¡A™_«m· ­™‹› ¨ ŸY›l™‹ž ¾ ñ ¯p· ­4­§l¦¢¦K™‹­!²º­4²¯l™¡rŸ¢­­·¼®Q· žj·¼²ÎÆ\²º¯l£B²‹´6©ªŸ¢« ÊˍÌ:¡l§p«¡rŸ¢­™0­"£T²"ž ™0£¢­!²‹´¤pŸ ¨ £B¤Y§Q£Bžjž¼ÆÉ£¢ž · ¦K¤p™0› ›Q£T²º£h£B«™"· ¤ ©Ä£¢¸_²4¤l™‹¸0™‹­­º£B«Æ]¾ PXÙsØ S‡©4ÜýaÿâãUTiãWV² ÿÓåáÝâãUTiãYXh矪ÌßáÝZTSã:ÝS¯ X]^,঩4Ü\«­´Êå3© WYXœZ åáàq¯ý¬àq¯ Üݤ\Ÿ¢«›l™_«5²ºŸ3²º™‹­!²5²¯l™4¦¢«u£T¡Q¯p™ ¨ ™b°±¡Q¯lŸ¢¤l™ ¨ ™£Bž ·j¦¢¤m° ¨ ™‹¤s²&£¢ž ¦KŸB«º·¼²º¯ ¨ ©ªŸB«ôÊ3ËÌ Ÿ¢§p²!¡Q§m²0´òµ½™í§l­™‹› £¢¤?ÊËÌ ­· ¨ §lžj£B²Ÿ¢«Á²º¯l£B²í¦¢™‹¤l™‹«º£B²™0­ë£:¸_¯Q£T«!° £¢¸_²™‹« ¨ £T²«· ÈÀ©ª«Ÿ ¨ £¢¤/·j¤m¡Q§m²ë­!²«· ¤l¦p´ µ¯lŸ¢­™ ¡6£T«u£ ¨ ™‹²™_«º­ £T«º™ ²¯l™ Òl«­!² ¸0£¢¤p›l· ›Q£T²™•£B¸‹¸0§m«u£¢¸_Æ £¢¤p›,²¯l™Ç¸0§ ¨ §pžj£B²· ]™<£B¸0¸‹§p«u£B¸_Æ¿ŸK©É£Bžjž ¸0£¢¤p›l· ° ›Q£T²™0­‹¾8˜š™ ¨ £¢›p™š©ÄŸ¢§p« ²™0­!²É­™_²­ µ¯pŸK­™•Òl«º­!² ¸0£¢¤l›p· ›Q£T²º™£¢¸‹¸0§m«u£B¸‹ÆÉ£¢¤p›•¸‹§ ¨ §lžj£B²·¼K™h£B¸0¸‹§p«º£¢¸_Æ 
µ™‹«º™:ã2=Kï+*ò´Ã(Kï+*\æu´&ã±îBï,*ò´Ã(+¶Y¾jµ+*\溴Gãt@¢ï,*ò´Ã(,µ!*\溴 £¢¤p›¹ã2(Kï+*ò´j(KîY¾jµ!*\æu´]«º™0­!¡A™‹¸‹²·¼]™0ž¼ÆK¾ ñ ¯l™‹­™¡Q£B«u£ ¨ ° ™_²º™_«º­3µ™‹«º™­™‹žj™‹¸_²º™‹›•®Q£¢­™‹› ŸK¤É²¯l™²ÎÆY¡Q· ¸0£¢ž¡A™_«!° ©ªŸ¢« ¨ £¢¤p¸0™3ŸK©÷¬]£B¡6£B¤l™‹­™¯l£¢¤p›pµ4«·¼²º· ¤l¦\Êˍ̾ 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 Character Recognition Accuracy (After NLP) Character Recognition Accuracy (Before NLP) Error Correction Accuracy First Rank Accuracy Cumulative Accuracy Grapheme (G-P aligned) Phoneme (G-P aligned) Grapheme (G only) Phoneme (P only) ö÷· ¦K§p«™;µÅ ÛËÛ¯Q£T«u£¢¸_²™‹«4«™0¸‹ŸK¦¢¤l·¼²·jŸ¢¤£B¸0¸‹§p«u£B¸‹ÆÏ®r™b° ©ªŸ¢«º™°>ÂèÓÈ룢¤p›£B©ª²™‹«•>ÂèÓÈ ö÷·j¦¢§p«™ µ ­¯lŸTµ­/²¯l™ ¸_¯Q£T«u£B¸‹²™‹«À«™0¸‹ŸK¦¢¤l· ° ²·jŸ¢¤:£B¸0¸‹§p«º£¢¸_ÆÅŸ¢©h²º¯p™G®6£¢­™‹ž ·j¤p™ÁÊˍ̥㪮r™0©ªŸ¢«™ >ÂèÓÈæ £B¤l› ²¯Q£T²GŸK©ç­!Æm¤l¸b¯m«ºŸ¢¤lŸK§p­š£¢¤l£¢ž¼Æm­· ­šŸK© ¦¢«º£B¡l¯l™ ¨ ™0­ë£¢¤p›À¡Q¯pŸK¤p™ ¨ ™‹­Âã £B©ª²™‹«M>/èâȍ溾ÏÜݤ ñ £T®Qž ™ 9: ¿·¾±A™_«º™0¤p¸0™ëŸ¢© ²¯l™í«º™‹¸0Ÿ¢¦K¤p· ²· ŸK¤ £B¸_° ¸0§m«u£B¸‹Æ ®A™‹²Ýµ™‹™0¤­§m¡A™‹«Y· ­™0›¹£¢¤p›É§l¤p­§p¡r™_«m· ­™‹› ¨ ŸY›l™‹ž ­ ¤ !oyxt¥ä2o 4n4¤ !oyxt¥Ĥto øŒxʙD‚  ‚ 4lPn  øŒxʙD   4lPn•  †`_ ð ¤ †  †`_›õ    Ža_ ¤ 0   _    ~ _ ¤ 0   _ r †`_ ð ¤ 0  ˜ _›õ  ¤  ¤ _ ¤  †`_ ¤ †  _ ¤  _   †`_ 𠤘  †`_›õ ¤ 0  _ ¤ r  †`_ ¤ 0  †`_ ¤ r  _ ¤ †`_ ð ¤ r  ˜ _›õ ¤ Ž    _ ¤     _ ¤ Ž    _ ¤     _ ö÷·j¦¢§p«™CµY´Ð¦B«u£T¡Q¯l™ ¨ ™¶ãŽÞ¢ßBà‹á_â æÇ£B¤l›+¡l¯lŸ¢¤l™ ¨ ™ ãÝÞ¢ßTàAßÓæš£B¸0¸‹§p«º£¢¸‹·j™‹­Á£T«º™ë¡l«º™‹­™0¤Y²™0› ­™_¡6£B«º£B²™0ž¼ÆK¾ örŸ¢«Ï¸‹Ÿ ¨ ¡6£B«· ­ŸK¤ ´4²¯l™¹£¢¸‹¸0§m«u£B¸0· ™0­òŸ¢®p²u£B· ¤l™‹›ë®YÆ §l­· ¤l¦­· ¨ ¡Qž ™¸b¯Q£T«u£¢¸_²™‹«Û®Q· ¦¢«º£ ¨¿¨ Ÿs›l™‹ž £B«º™3£Bžj­Ÿ ¡l«™0­™‹¤m²™0›A¾ ö6ŸB«:™bÈl£ ¨ ¡Qž ™¢´ë·j©ô²¯l™¿®6£¢­™‹ž ·j¤p™ Êˍ̿«™0¸‹ŸK¦¢¤l·¼²º· Ÿ¢¤&£¢¸‹¸0§m«u£¢¸_ƚ·j­î¢ï+*ò´E®YÆ»£Bž ·j¦¢¤m° · ¤l¦ò¦¢«º£B¡l¯l™ ¨ ™0­ £¢¤p›¡Q¯pŸK¤l™ ¨ ™‹­‹´r²º¯p™£¢¸‹¸0§m«u£B¸0· ™0­ ŸK©÷¦¢«u£T¡Q¯p™ ¨ ™‹­3£B¤l›¡l¯lŸ¢¤l™ ¨ ™0­ £B«™· ¨ ¡l«ºŸTK™‹› ²ºŸ @+(Y¾j(+*+£¢¤p›i(!=m¾Ã=,*Ï´«º™‹­!¡r™0¸_²º·¼s™0ž¼Æ¢¾ÜΩ*µ™h›pŸ ¤lŸB² £¢ž · ¦K¤2²º¯p™ ¨ 
£B¤l›:£B¡p¡Qž¼ÆÂžj£¢¤p¦K§l£¢¦¢™ ¨ Ÿs›p™0ž ­ · ¤m° ›l™_¡A™0¤p›l™‹¤Y²ž¼ÆK´²º¯p™‹Æë£T«º™»£B­ ž ŸBµ8£B­ îKîs¾Ã9,* £B¤l› @,µs¾j@+*ò¾EÜݲ·j­4ŸB®pY· ŸK§p­²º¯l£B²4²º¯p™"£Bžj· ¦¢¤ ¨ ™‹¤m²£¢ž ¦¢ŸB° «º·¼²º¯ ¨ ­§p¸0¸‹™0­­©ª§lž ž¼ÆÉ²º£Bù]™‹­3£B›pK£B¤s²u£B¦K™ Ÿ¢©5²¯p™«™_° ›l§p¤l›l£¢¤l¸_ƹ²Ÿ · ¨ ¡l«ºŸZ]™²º¯p™\ŸTK™‹«º£¢ž žE«™0¸‹ŸK¦¢¤l·¼²·jŸ¢¤ £¢¸‹¸0§m«u£B¸‹Æ]¾ ñ £T®Qž ™Ÿ9G­¯lŸZµ­\²º¯p™¹›l·¾±A™_«º™‹¤l¸‹™¹· ¤ë¸b¯Q£T«u£B¸‹²™‹« «º™‹¸0Ÿ¢¦K¤p· ²· ŸK¤š£B¸0¸‹§p«º£¢¸_ƹ®r™‹²ÎµÛ™‹™0¤¹²º¯p™h­§m¡ ™_«Y·j­™‹› £¢¤p›§l¤p­§p¡A™_«m· ­™‹›#£¢ž · ¦K¤ ¨ ™0¤s² ¨ ŸY›l™‹žj­‹¾ & ™‹«™¢´ ²º¯p™&§l¤l­§m¡r™‹«!m· ­™0›Â£Bžj· ¦K¤ ¨ ™‹¤m² ¨ Ÿm›p™0ž· ­ ¨ £B›l™ ©ª«Ÿ ¨ ²¯l™ò²!«u£B·j¤p·j¤p¦ ›l£B²º£ £Bž ·j¦¢¤l™‹›&®mÆ»§l­· ¤l¦¹²º¯p™ · ¤l·¼²º·j£¢žm™‹­!²· ¨ £T²º™ÛŸ¢©l²º¯p™Û§l¤p­§p¡A™‹«!m· ­™0›£Bžj· ¦¢¤ ¨ ™‹¤m² ¨ ŸY›l™‹žÛ›l™‹­¸‹«· ®r™‹›&·j¤š²¯l™Ï¡p«º™‹Y· ŸK§p­­™0¸_²·jŸ¢¤ ´E· ¾ð™B¾ ²º¯p™ ¨ Ÿm›p™0ž · ­ ¨ £¢›p™3®YÆçŸ¢¤l™«º™‹™0­!²· ¨ £B²· ŸK¤ ¾ ñ ¯l™_«º™í£T«º™Ðm·¼«²§Q£Bžjž¼Æ:¤lŸ ›l·¾±A™_«º™‹¤l¸‹™0­»· ¤'£¢¸b° ¸0§m«u£B¸‹Æ ® ™_²Ýµ™0™‹¤¹²¯l™­§p¡r™_«Y·j­™‹›»£B¤l›•§l¤p­º§m¡A™‹«Î° m· ­™‹› ¨ Ÿs›p™0ž ­‹¾ ñ ¯l· ­ ¨ ™Ó£B¤l­²¯Q£T²0´p· © µ4™ ¯l£ÓK™£¢¤ · ¤l·¼²º·j£¢ž¦B«u£T¡Q¯p™ ¨ ™b°v²ŸB°±¡l¯lŸK¤p™ ¨ ™•›p·j¸_²·jŸ¢¤Q£T«Æë£¢¤p› £&žj£T«º¦K™ £ ¨ ŸK§p¤Y²çŸ¢©"§l¤l£¢ž · ¦K¤l™‹› ¦B«u£T¡Q¯p™ ¨ ™•£¢¤p› ¡Q¯pŸK¤p™ ¨ ™«º™_¡l«º™‹­™0¤s²º£B²· ŸK¤ŸK©l²¯l™½­º£ ¨ ™¸‹ŸK¤s²™0¤s²­‹´ µ4™Ç¸Ó£B¤ê£¢§m²ºŸ ¨ £B²· ¸Ó£Bž ž ÆÀ£Bž ·j¦¢¤À²º¯p™ ¨ £B¤l›?§l­™ ²º¯p™ ¨ £¢­ £Gžj£¢¤p¦K§l£¢¦¢™ ¨ ŸY›l™‹ž3©ªŸB« ÊËÛÌ"´Ûµ4¯l· ¸_¯ ­· ¦K¤l·¼Òl¸Ó£B¤m²ž¼Æ · ¨ ¡p«ºŸTK™0­²º¯p™hŸZ]™_«u£¢ž ž«º™‹¸‹ŸK¦K¤p·¼²º· ŸK¤ £¢¸‹¸0§m«u£B¸‹Æ]¾ b c Då@B—lÚ5@B@¢DåØ × > × Ù × J R >6J!JlÙed¶Ø–¢C@ Æ«º£B¡Q¯p™ ¨ ™_°±¡Q¯pŸK¤p™ ¨ ™"£¢ž ·j¦¢¤ ¨ ™0¤s²4·j­4§p­§Q£Bžjž¼Æ ›p·j­Î° ¸0§p­­™0›Á·j¤Á²¯p™¸0Ÿ¢¤Y²º™bÈm²hŸK©²º™bÈm²Î°±²ºŸT°Ž­!¡r™‹™0¸_¯Ð­!Æm¤m° ²º¯p™0­· ­Ï£B¡p¡Qž · ¸Ó£T²º· ŸK¤p­‹¾íÜݤЫ™0¸‹™0¤Y²hÆ]™0£B«­‹´½£•žj£B«¦K™ ¤p§ ¨ ®r™‹«<ŸK©•µ½Ÿ¢«!ùm­í¯Q£ZK™®A™0™‹¤?¡Q§m®Qž · ­¯l™0›¶Ÿ¢¤ ¦¢«º£B¡l¯l™ ¨ ™ò²ºŸÉ¡Q¯pŸK¤p™ ¨ ™ç¸‹ŸK¤s]™‹«­·jŸ¢¤ ´E· ¤G¡Q£B«!²º· ¸b° §lžj£T«0´÷§l­· ¤p¦ ÒQ¤p· ²™Ï­!²º£B²™h²™0¸_¯p¤l· 
õY§p™0­‹¾m&ŸBµÛ™_]™‹«0´ ²¯l™_ÆÉ›p™Ó£Bž ²µ·¼²¯•™‹·¼²º¯p™‹«¦¢«u£T¡Q¯p™ ¨ ™b°±²ºŸT°v¡l¯lŸ¢¤l™ ¨ ™ ¸‹ŸK¤s]™‹«­·jŸ¢¤ëŸ¢« ¡l¯lŸ¢¤l™ ¨ ™_°±²ŸB°v¦¢«º£B¡Q¯p™ ¨ ™•¸‹ŸK¤s]™_«!° ­· ŸK¤Ôãğ¢¤l™ · ­ · ¤p¡l§p²Å£B¤l›8²º¯p™ Ÿ¢²¯l™_«#· ­ ŸK§m²!° ¡Q§m²_溴µ4¯l· žj™Éµ½™É£B«™¹µ½Ÿ¢«!ùm· ¤l¦GŸ¢¤Ç­!ÆY¤l¸b¯p«ŸK¤pŸK§p­ £¢¤l£¢ž¼ÆY­·j­Ÿ¢©¦B«u£B¡l¯l™ ¨ ™0­ £B¤l›¡l¯lŸK¤p™ ¨ ™0­G㪮rŸ¢²¯ ¦¢«º£B«!¡Q¯l™ ¨ ™‹­<£B¤l›À¡l¯lŸ¢¤l™ ¨ ™0­í£B«™#· ¤p¡Q§m²­<£B¤l› ²¯l™‹· « £Bž ·j¦¢¤ ¨ ™‹¤s²º­"£T«º™\Ÿ¢§p²!¡Q§m²_溾 ñ ¯Y§p­‹´I²¯l™_«º™\· ­ ž · ²!²žj™«º™‹žj™_K£B¤l¸‹™3®r™_²Îµ™‹™‹¤ç²¯l™‹­™¢¾ " ­h©Ä£B«ò£B­h²º¯p™£B§p²¯lŸB«º­hùm¤pŸTµ3´*²¯l™Ÿ¢¤lž¼ÆG¡Q£T° ¡A™‹«G²¯Q£T²ô£B›l›m«º™‹­­™0­&²¯l™<· ­­§l™<ŸK©¦B«u£T¡Q¯l™ ¨ ™b° ¡Q¯pŸK¤p™ ¨ ™ £¢ž · ¦K¤ ¨ ™0¤s²ò£B¸0¸‹§p«º£¢¸_ÆÐ· ¤ë¬]£B¡Q£¢¤l™‹­™· ­ ŸK¤p™4®mÆ8B£Bžj›mµ· ¤£B¤l› ñ £¢¤l£BùB£ ãŽø<(+(!(s溾 ñ ¯l™_Æ«º™b° ¡AŸ¢«!²º™‹›A(!@m¾j¶Y(,*Í£B¸0¸‹§p«u£B¸_Æ&©ªŸ¢«\¦¢™0¤p™‹«º£¢ž]Ÿs¸Ó£T®Q§Y° žj£B«Æòµ½ŸB«º›p­½²u£Tù]™0¤ ©ª«Ÿ ¨ £\¬]£T¡6£B¤l™‹­™›p·j¸_²º· Ÿ¢¤Q£T«Æs´ ®YÆ §p­· ¤l¦£¢¤\£Bž ·j¦¢¤ ¨ ™‹¤s² ¨ ŸY›l™‹žp®Q£¢­™‹›Ÿ¢¤\£­¸0ŸB«º™ ­· ¨ · žc£T«4²Ÿ ñ öä°±Üq¿3öÛ´Q£¢¤p› £¢¤ · ¤p¸‹«º™ ¨ ™‹¤Y²º£Bž §p¤l­§Y° ¡A™‹«!m· ­™0›<ž ™Ó£T«º¤l· ¤p¦Á£Bž ¦KŸ¢«· ²¯ ¨ ¾ ܎² ·j­òY™_«Æô›l· ©j° ÒQ¸‹§lž¼²²ŸÉ¸0Ÿ ¨ ¡6£T«º™\²¯l™‹·¼««º™0­§pž ²­"µ·¼²º¯GŸ¢§p«º­"®r™_° ¸0£¢§l­™¹Ÿ¢©3²º¯p™¹›l·¾±A™_«º™0¤p¸0™‹­ç· ¤ë²º¯p™ ²«º£¢· ¤l· ¤l¦Ð£B¤l› ²™0­!²4›Q£T²u£§l­™‹› ¾'& ŸTµ™_]™‹«‹´p­·j¤p¸0™µ4™3£¢­­§ ¨ ™B´6· ¤ ¦K™‹¤l™_«u£Bžå´*²º¯p™¤Q£ ¨ ™ç²u£B­!ùз ­Ï­· ¦¢¤l·¼ÒQ¸0£¢¤s²ž Æ ¨ Ÿ¢«™ ›l·¼ì\¸‹§lž¼²Û²¯Q£¢¤Ï²¯l™ ¦¢™0¤p™‹«º£¢ž6sŸ]¸0£B®l§lžj£B«!Æ\²º£¢­!ù6´mµ½™ ¸‹ŸK¤l­· ›p™‹«"Ÿ¢§p«"«™0­§pž ²Ÿ¢©›(+(Y¾Ã=,*ê«™0¸0£¢ž žE®YÆ ­§p¡A™_«!° m· ­™‹› ¨ Ÿ]›l™‹žl£B¤l›8(+@Y¾Ã=,*«º™0¸0£¢ž žs®mÆ"§l¤p­§p¡r™_«m· ­™‹› ¨ ŸY›l™‹žI²ŸÏ¯Q£ZK™¦¢«º™0£B²™‹«­· ¦K¤p·¼ÒQ¸Ó£B¤l¸‹™"²¯Q£B¤²º¯p™0·¼« «º™‹­§lž¼²­‹¾ >£B¦]£B²º£ ãÝø<(+(!@sæ¡p«ºŸ¢¡AŸ¢­™0›ë£G¬]£T¡6£¢¤p™0­™•Ê3ËÌ ™_««ºŸB«Å¸0ŸB««º™‹¸‹²· ŸK¤ ¨ ™_²º¯pŸY› §l­· ¤l¦ÀµŸ¢«›m°±®6£B­™0› žj£¢¤p¦K§l£¢¦¢™ ¨ ŸY›p™0ž•£B¤l›8¸_¯l£B«º£¢¸_²º™_« ­¯l£B¡r™Â­· ¨ ° · žc£T«º·¼²ÝÆK¾ Ë۟ ¨ ¡Q£B«™0›êµ·¼²º¯¶ŸK§m« ­· ¨ ¡lžj™:ÊËÛÌ ¨ 
ŸY›l™‹žpé5õY§Q£T²º· Ÿ¢¤Éã2=s溴]²¯l™‹· « ¨ Ÿs›l™‹žQ¸0£¢¤­Ÿ¢«²5¸0ŸB«!° «º™‹¸‹²· ŸK¤ ¸0£¢¤p›l· ›Q£T²™0­ µ·¼²º¯2²º¯p™ô­º£ ¨ ™ô™‹›l·¼²š›p·j­Î° ²º£¢¤p¸0™<®6£B­™0›/Ÿ¢¤À¸b¯l£B«u£B¸‹²™_«Á­¯Q£T¡A™ ­· ¨ · žj£B«º·¼²ÎÆ¢¾ ñ ¯p·j­hµŸK§pžj›&®A™ £•]™_«ÆG™–±A™‹¸_²º·¼s™ µ½£ÓÆG²Ÿ ÒQž¼²™‹« ŸK§m²4¦¢«u£T¡Q¯p™ ¨ ™b°±¡Q¯lŸ¢¤l™ ¨ ™²º§m¡Qž ™0­½«™‹²!«º· ™‹s™0›ç©c«ºŸ ¨ ¡Q¯pŸK¤p™ ¨ ™‹­#£¢­ ù]™‹ÆY­ · ¤ £B¡p¡l«ŸTÈm· ¨ £B²™ ¨ £T²¸‹¯Y° · ¤l¦m´s­· ¤l¸‹™Û¡Q¯lŸ¢¤l™ ¨ ™b°v²ŸB°v¦¢«º£B¡l¯l™ ¨ ™4¸‹ŸK¤YK™_«º­· ŸK¤·j­ ¨ ŸB«º™¹£ ¨ ®Q· ¦¢§lŸK§p­‹¾ ñ ¯p§p­‹´4µ™É£T«º™ ¸‹ŸK¤p­·j›p™‹«·j¤p¦ · ¨ ¡Qž ™ ¨ ™0¤s²·j¤p¦ ²¯l™‹·¼« ÊËÌ ¨ Ÿs›p™0žE£¢­"£ç­§p®7$™‹¸_² ©ªŸ¢«©ª§m²º§m«º™µŸ¢«!ùr¾ f Õ Ø × — R ÚE@¢D±Ø × ˜š™›l™_]™0ž Ÿ¢¡r™‹›»£ ¤lŸZ]™0ž÷žj£B¤l¦K§l£¢¦¢™ ¨ Ÿm›p™0ž÷®Q£¢­™‹› ŸK¤¦¢«u£T¡Q¯p™ ¨ ™b°±¡Q¯lŸ¢¤l™ ¨ ™ ²§p¡lž ™0­‹´µ¯l· ¸b¯· ­ŸK¤p™ Ÿ¢«›l™_«ŸK© ¨ £B¦K¤p· ²§l›p™\­ ¨ £Bžjž ™_«²¯Q£¢¤•µ½ŸB«º›Y°v®Q£¢­™‹› ¨ ŸY›l™‹ž ­‹¾h˜»™\£Bž ­Ÿ ›l™_s™0ž Ÿ¢¡Q™0› £¢¤š£Bžj· ¦¢¤ ¨ ™‹¤s²"£Bž ° ¦KŸB«º·¼²¯ ¨ ŸK©r¦¢«º£B¡l¯l™ ¨ ™0­*£B¤l›h¡l¯lŸ¢¤l™ ¨ ™0­5©ªŸ¢«5®rŸB²º¯ Ÿ¢«›l· ¤Q£T«Æh²º™bÈm²½£B¤l› ÊËÛÌǟK§m²¡l§p²‹¾'BÛÆ\§p­·j¤p¦²º¯p™ žj£¢¤l¦¢§Q£B¦K™ ¨ Ÿs›l™‹ž6£B¤l›²º¯p™£¢ž · ¦K¤ ¨ ™0¤Y²E£¢ž ¦¢Ÿ¢«º·¼²¯ ¨ ´ µ4™Éµ4™_«º™ £B®lžj™¹²Ÿô­· ¦K¤p· Òl¸Ó£B¤m²ž¼Æë· ¨ ¡p«ºŸTK™ ¸_¯Q£T«!° £¢¸_²º™_««º™‹¸0Ÿ¢¦K¤p· ²· ŸK¤&£¢¸‹¸‹§p«u£B¸‹Æ · ©®AŸB²º¯š¦B«u£T¡Q¯l™ ¨ ™ £¢¤p›h¡Q¯pŸK¤p™ ¨ ™4«º™‹¡p«º™‹­™0¤s²º£B²·jŸ¢¤l­5Ÿ¢©r²¯l™· ¤m¡Q§p²Û£B«º™ ¦K·¼s™0¤ £B²½²¯l™­º£ ¨ ™²· ¨ ™¢¾ “ —]C × Øhg R JpÙ*HÓLMJ × J ñ ¯l· ­«º™‹­™Ó£T«º¸b¯ µ4£¢­4›pŸK¤p™"µ¯l· ž ™²¯l™ £B§p²¯lŸB«µ£B­ m· ­·¼²º· ¤l¦\£B² " ñjiñ è£B®l­‹¾Û܍µ· ­¯ ²Ÿh²¯Q£B¤pùd13™0¤ ˍ¯m§p«º¸º¯£¢¤p›Ÿ¢²¯l™_« ¨ ™ ¨ ®6™‹«­÷£T²¬" ñjiñ è£T®Q­©ÄŸB« ²º¯p™0·¼«¯p™0ž¼¡l©Ä§pžI¸‹Ÿ ¨h¨ ™‹¤s²º­½£¢¤p››l· ­¸0§p­­· ŸK¤l­‹¾ × JÞTyJl–+J × — Jl@ k oyrz¤ 4lmlÓxʙonqpsr<™urzlŒx6 utvnƒo Y™uxʙwpyxâl <š wn k ¤6 wnP™ %™uxʙwp ™Œn {zX™Œ¤ 4n%lŒx6}| lP˜›™urz¤  ¤¤ Ž ylân¡oy¥ƒ™l Y™urÃlPn'rzla <oqp rto¦£qrәŒn /£ylŒxtxto¦£qrXoyxtxzlPn<o¦l 4¤U£ %™uxʙŒ£qrtoyxz¤pGxzlPn%øl š ¤ <p ¤tr6 r6 <rto ^p <o lÄoyr2o °™Œn Ãn4¤2oyxtrto ¶ÄnJ~ƒ™DY™Œn<o¦¤toI™Œn mÞn‘p øl=ä 
•¤2o¦n‘rto¦n4£qo¦¤ %¤Ãn4ø€™ux‚n–lÎ¥¡˜—lº <o läjƒtÌnF„†…s‡‰ˆ‚ŠŒ‹† ސ pY™ŒøPo¦¤   r’‘\¤~  k ʗlŒr6 ‘š”“X™l pþÃnm™Œn [•âlPvj 4˜A k ™Œn%™onP™ ¤¤¤  k <o ™Dl㦙ur6ÄlPn%¤¡l  4n%¤ !oyxt¥ä2o l oΙuxÊnÄn4ø?rzl–~–™DY™Œn<o¦¤to øŒxʙ <o¦˜¡oqp/ %lPn<o¦˜¡o™l=ÄøPn%˜Ko¦n‘r Ftwn]—˜„‰‡š™ ŽaŽœ›J`ž Ÿ  ¡£¢ ¥¤¦`§©¨y§ ¡«ª ¤­¬’ž¥®°¯ ¡ ¬‚± ‡ ¬ ²`ž£§­¯³§}´µ¯³§ Š ²`¶ ª ž‚²`· ‡ ²`§  ´ ª ²´a¬Z¸¹ž °º¥¬ ¡£¡ ¯»§¼´ pºY™ŒøŒo¦¤ ¤ ‘\  €™Œ¤z™Œn4l ]•¦Äøƒ™Œ¤6 = %™ ¤¤ Ž [l  llĚ´™ 4rtlP˜›™urto (  p xto¦£qrzlŒxtšQ™Œ¤t¤6Ĥtrz™Œn%£qo?¤2oyxt¥Ä£qoWr6 %™ur ™Œ£y£ylP˜—˜—lº %™urto¦¤Œ <oqp øŒo¦n4oyxʙur2o ½n–oyšºpXlŒx6 šÃn <rá¥j™½rto lÄoj 4lPn<o¦¤ ƒtwn ¸¹²aº°¯ ¾¿º ÀW¬’·Á¬ º `ÂH ª §­¯»º ²a¶Ã¯»`§ „ `§’Ä’¬’ž ¬«§Åº£¬ pY™ŒøPo¦¤  r’‘\:r Ž  lÆnÇn‘lÈ|½lPn%n4l½™Œn zX™Œ¤6 4lȕâlPn4øPl ¤¤~ yÉÞlP¤2r4xzlƒ£qo¦¤z¤Än%ø ™lÃøPlŒx6Är6 4˜}Y™Œ¤to dlƒnmr6 <o>4xzlY™D=lä2r6ã;™Œn d¤to¦˜—™Pn‘p r6ã˜Koyr6 4l  lŒxj~ƒ™DY™Œn<o¦¤tomʚ˿Ì`utÌn ¸†ž‚°º¥¬ ¬ ±`¯³§}´ ¡ Ä ˆÍ„‰ÎƗÆÏZ ŽaÐ pY™ŒøŒo¦¤  Ž  ‘  Ž ¤  •¦ÄxzlÍn­€°lŒx6qp¼•¦ xÊlŒrtlP˜—llâ¤tlÑp‘™Œn jr 4lƒvql€?™onÃn4l ¤¤  ÌÓl %¤2r/n‘pwøŒxʙŒ˜ ˜—l <o l¬l  ~ƒ™DY™Œn<o¦¤to£T %™uxʙŒ£qrtoyx™Œn Ärz¤?™D‚l㦙ur6ÄlPnŸrtlY %l–£j 4˜¡o¦n‘r°xto¦£ylPøPnÄrÃlPn ҈‚ӈ̈́‰Ó À޲`§ ¡ ²aº’¶q¯»`§ ¡ `§ ˆ §«Ä’`ž¥Â˜²`¶Ã¯Ã`§Ô²`§Å±µÕ­Ö ¡ ¶v¬’ ¡¥p× r ¤ p Ø ð ˜ õ { Ž r£‘ Ž r   €™Œ¤z™P™on¿x¬™Œøƒ™urʙ ¤¤ Ž ml)¤2rzl–£ %™Œ¤2r6ã½~ƒ™DY™Œn4o¦¤2o ˜—lŒx2p ‚ 4llÄlPø Ä£¦™lX™Œn%™lĚ<vqoqxN 4¤Ãn4ø°™  lŒxpU™Œx6 ‘p2 œY™Œ£¥npG™ux6 ‘p Ù­Ú n‘p2!o¦¤2râ¤to¦™Œxz£T °™lÄøPlŒx6Är6 4˜ ¹twn[„†…‡Uˆ틆 ސ pY™ŒøŒo¦¤ 0 † £‘0 † r  €™Œ¤z™P™on†x¬™Œøƒ™urʙ ¤¤ uËálPn‘rtoqú<r2p/!™Œ¤2o ФYo l=l=Än4ø£ylŒx2p xto¦£qr6ÃlPn  lŒx~ƒ™D!™Œn<o¦¤2o×ʚ˿Ì`˜twnµ„†…s‡‰ˆ‚ŠŒ‹† Ž}Û p Y™ŒøŒo¦¤   †  ‘    €™Œ¤z™P™onhx¬™Œø–™urz™ ¤¤  ½~ƒ™DY™Œn4o¦¤2o×ʚ˿ÌSoyxtxzlŒx'£ylŒxtxto¦£ p r6ÃlPnN 4¤6Än%ø £T %™uxʙŒ£qrtoyxÞ¤ %™D!oU¤ØAlj™ux6Är̚K™Œn ¡¤trʙur6Ĥ2r6㦙l lj™Œn4ø %™PøŒo¡˜—l <o l#ÜtÌn݄¿…s‡UˆŠŒ‹¹Ã—˜„‰‡Ü™ ŽaÞ p‚Y™ŒøŒo¦¤ ¤ 00°‘ ¤ 0    t̙Pnߕ wà‡Ärtrto¦nW™Œn k ؗlŒr6 ƒšmË|­“áo ll/ ¤ ¤  k <o'vqoyxzlup  xto°áº <o¦n4£qš‰4xzllÄo¦˜Â{ƒù¤2r6ʛ™ur6Ãn4øKr6 <o|4xzlY™D=lÄr6 o¦¤Ul  n%lÎ¥ƒo lùoy¥ƒo¦n–rz¤ 
Ãn´™ %™D4r6Ä¥ƒo•rtoqú<r¡£ylP˜N4xto¦¤t¤6ÄlPn•RˆÓÓsÓ À޲`§ ¡ ²aº’¶q¯»`§â`§ ˆ §«Ä’`ž£ÂŒ²`¶Ã¯Ã`§ãÀ ¢ ¬¥`ž£Ö p ~ r<𠎑õ { †  ˜ ‘ † ¤ Ž 
2000
49
[Title and author block unrecoverable in this extraction.]

Abstract

Pattern-Based Machine Translation is one of the machine translation methods which performs syntactic analysis and structure transfer at the same time using bilingual patterns. PBMT is used to expand the length of patterns up to sentence-length in order to reduce ambiguities in translation, but it brought out the problem of rapidly increased patterns. We propose a model which shortens the length of patterns to phrase-length and reduces ambiguities in translation by using a two-level translation pattern selection method. In the first level, the proper translation patterns are selected by using a hybrid method of exact example matching and semantic constraint by thesaurus. In the second level, the most natural translation pattern for the verb phrase is selected among the selected translation pattern categories by using statistical information of the target language. By using this proposed model, we could shorten the length of patterns without raising the ambiguities in translation.

1 Introduction

A transfer-based machine translation method generally has four steps (Kim, 1994): morphological analysis of the source language, syntactic analysis of the source language, structure transfer to the target language, and sentence generation in the target language. In the structure transfer step, transfer patterns came to be heavily lexicalized in order to raise the accuracy of translation. Pattern-Based Machine Translation (PBMT) performs both syntactic analysis and structure transfer simultaneously using these lexicalized patterns (Takeda, 1996), which shortens translation time and raises the accuracy of syntactic analysis. At first, since all patterns were short phrase-length patterns, many syntactic ambiguities occurred in pattern matching; as a result, patterns were lengthened to sentence-length to reduce ambiguities. But sentence-length patterns cause a pattern sparseness problem, because the same number of sentence-length patterns covers fewer sentences than the same number of phrase-length patterns. To overcome this problem, Watanabe and Takeda (1998) adopted an example-based approach. However, the example-based approach has its own problems. One of them is that two different verbs of the target language may take two semantically similar nouns as their objects, even though a single verb of the source language takes both nouns as objects. For example, the Korean verb "ta-da" with the objects "bus" and "mal (horse)" is translated into the English verbs "take" and "ride" respectively.

Much research has been done on this word-selection problem, using syntactic collocation (Kim et al., 1996; Lee et al., 1999), semantic constraint by thesaurus (Moon et al., 1998), semantic features (Palmer et al., 1999) and statistical information (Brown et al., 1991; Dagan and Itai, 1994). When syntactic collocation is used, every example for a verb has the same effect in selecting the proper word sense of the verb; consequently, it is difficult to obtain representative examples and to describe senses whose domains differ in size. When only the semantic constraint by thesaurus is used, it is difficult to obtain good translated words because of the insufficient-thesaurus problem. As a result, we use a hybrid method combining exact example matching with syntactic collocation and semantic constraint by thesaurus.

We use phrase-length patterns to solve the pattern sparseness problem, and propose a two-level selection method for translation patterns to reduce the ambiguities of pattern matching.

2 Two-Level Translation Pattern Selection Method

We use only monolingual resources to reduce translation ambiguities. It is almost impossible to incorporate semantic knowledge of two languages: the mutual information between two different languages can be described not by one-to-one mapping but only by coarse mapping (Palmer et al., 1999). For example, the Korean verb phrase pattern "NP-lul ssu-da" can be translated into "wear NP", "write NP", "compose NP", "use NP", or "spend NP". If NP has the meaning 'head-gear', then the Korean verb phrase pattern can be translated into "wear NP" or "put on NP". If the headword of NP is "don (money)", then the verb phrase pattern is changed into "spend money". In this way, "NP-lul ssu-da" has five translation pattern categories. It takes two steps to select the most natural English translation pattern as the corresponding pattern of a Korean pattern:

1) to select possible translation pattern categories,
2) to select the most natural English translation pattern among the possible translation pattern categories.

The first step is performed in pattern matching, and the second step in pattern transfer.

2.1 Pattern Matching

There are several English translation patterns for a Korean verb phrase pattern (e.g. "NP-lul ssu-da" can be translated into "wear NP", "write NP", "use NP" and so on). We focus on selecting the most natural translation pattern among them. In the first step, we divide them into several translation pattern categories using both examples and the semantic constraint by the Korean thesaurus.

[Figure 1: An example of applying the two-level translation pattern selection method; the figure contents are unrecoverable in this extraction.]

The first three categories are semantically constrained by the Korean thesaurus. If the meaning of the headword of a noun phrase NP1 is a hyponym or a hypernym of the semantic constraint for NP, then NP1 can be matched to NP. For example, "si" can be an object of "ssu-da" because the meaning 'production (language)', one of the meanings of "si", is a hyponym of 'production'; that is, "si-lul ssu-da" is matched to "NP<production>-lul ssu-da" with the possible translation patterns {"write NP", "compose NP"}. At the same time, "si" is translated into "poem" among all English words for the Korean word "si", because only "poem" has the meaning 'production (language)'. The last two translation pattern categories are constrained by the exact example matching method: if and only if the headword of NP1 is the Korean word for 'strategy' does "NP1-lul ssu-da" take {"adopt a stratagem"} as its translation pattern category. Because exact example matching is too rigid, it would be better to adopt an example-based approach (Watanabe and Takeda, 1998); in this paper, however, we implemented only the exact example matching method.

The proposed hybrid method of exact example matching and semantic constraint by thesaurus reduces both the number of possible translation patterns and the ambiguities of pattern matching.

(1) [Korean sentence unrecoverable] (I want to write a poem after eating something.)

For example, both the Korean verbs "mek-da (eat)" and "ssu-da (write)" can take "si-lul" as an object in example (1). "mek-da" takes only nouns with the meaning 'something to eat' as objects, but "ssu-da" can take nouns with the meaning 'production' as an object. As a result, "si-lul" is regarded as an object of "ssu-da", because "si" does not have the meaning 'something to eat' but has the meaning 'production (language)', a hyponym of 'production'.

2.2 Pattern Transfer

After the possible translation pattern categories are selected in pattern matching, the most natural translation pattern among those patterns is selected in pattern transfer. To obtain the most natural English sentence, we use English syntactic collocational information, especially for the subject-verb relation and the verb-object relation. We regard the English pattern of the most frequent syntactically related pair as the most natural translation pattern. As explained above, "write NP" and "compose NP" are selected as the translation patterns for "NP-lul ssu-da", and "poem" is selected as the translated word for "si" in pattern matching. Therefore, "write poem" and "compose poem" are the possible translations for "si-lul ssu-da". The English verbs "write" and "compose" appear with the English noun "poem" in a verb-object relation in the corpus four times and zero times respectively. Therefore, "write poem" is selected as the most natural translation for "si-lul ssu-da".

Pattern transfer is needed especially when a verb of the source language can be translated into several words of the target language according to its objects, although the verb takes semantically similar nouns as objects. For example, NP in "NP-lul ta-da" must have the meaning 'something to ride', and its translation patterns can be "ride NP" or "go by NP". There are many Korean nouns with the meaning 'something to ride', like "horse", "car", "bus", "train" and so on. When the headword of NP is "horse" or "car", "NP-lul ta-da" is usually translated into "ride NP", but when the headword of NP is "bus" or "train", it is usually translated as "go by NP". But the Korean thesaurus which we used does not bring out the differences. This problem shows that Korean and English have very different semantic hierarchies of thesaurus construction (Palmer et al., 1999).

2.3 Pattern Scoring

We used the Generalized LR parsing (GLR) algorithm (Tomita, 1991) for pattern matching. To prune out nodes made by the GLR algorithm during pattern matching, we score each node and remove nodes which have lower scores than those of the top ten nodes with the same range and the same syntactic category. Several methods can be used to score each node: the frequency of each pattern in the corpus (Sornlertlamvanich, 1998) and the Korean syntactic collocational information (Yoon, 1998). Since we use very lexicalized patterns, the frequency of each pattern in the corpus is not available. Also, since we did not describe the syntactic information of the Korean patterns, the Korean syntactic collocational information is not useful yet. Therefore, we used the following preferences for scoring patterns:

- to prefer more lexicalized patterns;
- when semantically constrained by the thesaurus, to prefer patterns whose meaning of the argument is closer to the constraints for the argument;
- when constrained by exact example matching, to allow patterns whose headword of the argument is included in the examples;
- to give a penalty to arguments which have no constraint.

[Figure 2: An example of pattern]

    ga   NP1                lul   NP2               ssu-da   VPS
         [111000(person)]         {'don(money)'}

[Figure 3: A pattern scoring method]

    score(p) = Σ_{∀c} (P(c) + α × score(c))

    P(c):
      1. if c is lexical, β
      2. if c is semantically constrained by the thesaurus with SC, γ × sem(c, SC)
      3. if c is constrained by the exact example method with ES,
         1) if headword(c) ∈ ES, δ
         2) if headword(c) ∉ ES, −θ1
      4. if c has no constraint, −ρ

    sem(p, SC) = φ + (1 − φ) × (NTTLENG − distance(p, SC)) / NTTLENG

    • c: p's child node
    • N: the number of p's child nodes
    • if c is lexical, score(c) = η

A pattern scoring method according to these preferences is shown in Figure 3. In the case of Figure 2, score(VPS) is the sum of (P(c) + α × score(c)) over all child nodes c of VPS. If c is lexical, then score(c) = η; otherwise, score(c) is calculated recursively. For example, score("ga") = score("lul") = score("ssu-da") = η, and score(NP1) and score(NP2) are calculated recursively. P(c) is determined by the constraints for c. If c is lexical, P(c) = β; if β is higher, lexicalized patterns are more preferred. If c is semantically constrained by the thesaurus with SC, P(c) is determined by the distance between SC and the real semantic code of c in the thesaurus. If they are closer, P(c) is higher, i.e. the second scoring preference is applied. The function distance(p, SC) means the distance between the semantic code of p and SC. For example, if p has the semantic code '111120 (human role)' and SC is '111000 (human)', then distance(111120, 111000) = 2. If c is constrained by exact example matching with ES and the headword of c is included in ES, a high score is given to P(c). If c has no constraint, then P(c) is given as a penalty.

3 Experiment

3.1 Experiment Environment

We translated 100 sentences in letters of the trade field. The thesaurus has six levels and includes about 1,800 words. And we manually made 486 translation patterns for the test sentences. These also include very general patterns such as NP→NP, Subj+Verb, and Object+Verb. The English syntactic collocational information (tens of thousands of different subject-verb pairs and verb-object pairs) was obtained from the Penn Treebank.

3.2 Pattern Length Comparison Experiment

Experiment I (EXI) means the system of Seo et al. (1998) and Experiment II (EXII) means our system.

[Table 1: The result of the pattern length comparison experiment; rows: average length of sentence, the number of patterns, average length of pattern, the number of patterns × average length of pattern, the number of errors; cell values unrecoverable.]

Table 1 shows the result of the comparison between the two systems. The proposed model reduces the average length of patterns relative to Experiment I, and the number of ambiguities of the proposed model is less than that of Experiment I. Therefore, we conclude that our proposed model is effective in reducing both the length of patterns and translation ambiguities. The number of patterns of the proposed model is larger than that of Experiment I; but as the product of the number of patterns and the average pattern length is smaller for the proposed model, it is expected that as the size of the corpus grows, the number of patterns of the proposed model will grow more slowly.

3.3 Error Analysis

The errors of our system are roughly divided into errors in pattern matching and errors in pattern transfer and sentence generation. The errors that appeared in pattern matching are almost all errors of the govern and governor relation.

[Table 2: The error analysis result of pattern matching; rows: Subject:Verb, Adverb(phrase):Verb, Adjective(phrase):Verb, Noun(phrase):Noun, Total; cell values unrecoverable.]

To solve these problems, Korean syntactic analysis (Yoon, 1998) is needed.

We counted the errors in pattern transfer and sentence generation subjectively. Thus, the number of errors may not be exact, but the ratio of error types seems to be meaningful. A restoration error occurred when the necessary information in the English sentence, e.g. article, tense, or number, was not restored.

[Table 3: The error analysis result of pattern transfer and sentence generation; rows: restoration error, analysis error, dictionary construction error, translated word selection error, sentence generation error, Total; cell values unrecoverable.]

4 Conclusion

We proposed a two-level translation pattern selection model to reduce the length of patterns and to reduce the ambiguities that occur due to short patterns. In the first step, the ambiguities in pattern matching are reduced and several translation pattern categories are selected by the hybrid method of exact example matching and semantic constraint by thesaurus. In the second step, the most natural translation pattern is selected by the English syntactic collocational information.

In the future, the syntactic collocational information of Korean needs to be used to reduce ambiguities in the pattern matching step. The syntactic collocational information of English also needs to be expanded, e.g. to the adjective-noun relation and the verb-adverb relation. And research on restoring the necessary information in English sentences has to be done.

Acknowledgement

This research is supported by the Korea Science and Engineering Foundation (KOSEF) through the "Multilingual Information Retrieval" project at the Advanced Information Technology Research Center (AITrc), and also supported by the Korea Terminology Research Center for Language and Knowledge Engineering, under the project "Development of Deep-Level Processing and Quality Management Technology for Very Large Korean Information Bases", a project of plan STEP 2000 funded by the Ministry of Science and Technology.

References

[The reference list is unrecoverable in this extraction; it includes entries cited above, among them Kim (1994), Takeda (1996), Watanabe and Takeda (1998), Tomita (1991), Palmer et al. (1999), Seo et al. (1998), Yoon (1998), and Sornlertlamvanich (1998).]
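As an illustration, the recursive pattern-scoring method of Figure 3 can be sketched as follows. This is only a sketch under stated assumptions: the weight values and the digit-wise `distance` function are hypothetical (the paper's own settings are not recoverable here), and the `Node` class is an invented stand-in for the parser's chart nodes.

```python
# Sketch of the Figure 3 scoring method. Weights and distance() are
# illustrative assumptions, not the paper's actual values.
from dataclasses import dataclass, field
from typing import Optional

NTTLENG = 6  # thesaurus depth; the paper's thesaurus has six levels

# Hypothetical weights: alpha, beta, gamma, delta, theta1, rho, eta, phi.
ALPHA, BETA, GAMMA, DELTA, THETA1, RHO, ETA, PHI = 0.5, 1.0, 2.0, 3.0, 3.0, 0.5, 1.0, 0.5

@dataclass
class Node:
    lexical: bool = False           # lexical (terminal) node, e.g. "ga", "ssu-da"
    code: str = ""                  # real semantic code of the node, e.g. "111120"
    sc: Optional[str] = None        # thesaurus semantic constraint SC, e.g. "111000"
    examples: Optional[set] = None  # exact-example constraint ES
    headword: str = ""
    children: list = field(default_factory=list)

def distance(code: str, sc: str) -> int:
    """Assumed: the number of thesaurus levels (digit positions) where codes differ."""
    return sum(a != b for a, b in zip(code, sc))

def sem(code: str, sc: str) -> float:
    """sem(p, SC) = phi + (1 - phi) * (NTTLENG - distance(p, SC)) / NTTLENG"""
    return PHI + (1 - PHI) * (NTTLENG - distance(code, sc)) / NTTLENG

def P(c: Node) -> float:
    if c.lexical:
        return BETA                        # preference 1: lexicalized patterns
    if c.sc is not None:
        return GAMMA * sem(c.code, c.sc)   # preference 2: closeness to thesaurus constraint
    if c.examples is not None:             # preference 3: exact example matching
        return DELTA if c.headword in c.examples else -THETA1
    return -RHO                            # preference 4: penalty for no constraint

def score(p: Node) -> float:
    """score(p) = sum over child nodes c of (P(c) + alpha * score(c))."""
    if p.lexical:
        return ETA
    return sum(P(c) + ALPHA * score(c) for c in p.children)
```

For the pattern of Figure 2, NP1 carries the thesaurus constraint [111000(person)] and NP2 the example constraint {'don(money)'}, so the VPS node scores highest when the matched first noun has a semantic code close to 111000 and the second noun's headword appears in the examples.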
**        :     *1  &'(                                         (   (  (  )* +  (   ,  -$   (   -$    -$                $     (  "  (  !    R R R R R R H ? 8  6                  + ∩ × × = ∈ 3     S 1    S ?       S    S    RR S*      RRS*          E  /6   .     RRS$   RRS$ T$/((9  1S$/((8   # " #$ %   31 #  1   1     * *  /#  1* = 6$77<8       / :         1              1  #    / #      *1  56H8/  *   1 /        6H8  3      S 1       S       S         S         S #       S  *     & ##" #$ %   3    : G 1      6   =    / $7>!8         6A ,  $7778   /A  ?  GE  6K L *($KL$H(;(8 S(/);>9 6 *($E19$H(;(E18  6K L *($KL$H()(8S(/)$;9  6 *($E19$H()(E  8  6K L *($KL$H$((8S(/!H;9  6 *($E19$H$((E 8 MM . .         6!8   +         -$             /           "   " 0 1   %222!   U   !  V 1          U" "  ! " V     1   * / &  1         *  5 6!8/     *   /  ' (      %         1     * ;1  /  1 *  2 1    *         *     #*  #*  #* /         1  1  *1* /       1* / )   " #$ %   111    *        1  **   /  1  1   1 *       1          1 *  /4@2  1?/       1 5 D N D  *         DN'D *    *     1  D"   D/    ?    1 D"   D   D"   #"   #"  D   "   *6 H8/     = H/# #*       **   D  !   DD !"  D1 56;8/           6;8 1  $       **     DN'D     *2/ $!   #  .    ** /               × × − × = −     #      #     #              → × × = " ## # $ "# ## # $ %%# # & & % # &                                           ∈ ∈ = = =                  *1  O  1   *    O    1 * /  ? **    *     * !"/7J1 **   *    ?  *H</(J/   **  *       1       */  *   % "#      1 1       1W       * /               1   $E6 8 E 6  8/  =  ?      DK L6 8 " KL68D  DK L6 8 "  KL 6 8D      1 *    DK L 6 8D        DKL  6  8D/ * +,  3  *      $;((((     #* / 1<((((         #* /     #*  * !((        /     $H( *       1 $!/"  1   7/)    1  6$"/7   1 8         /              /  ?            /?  
6*H8 1     #*  1  1  /3 1       1  2* /  #*   1   *          *   1   /  #*        #*            @6$77"8/  11 #  5    # 1    1     # / # #*      *1* 1 *  1      1 2  / =        *1/            *H/      * H        #* 251     <(J    5 1       1        / :    1      H/H/$  ? 1   1    1          / =  ?        1  D  6 8D  1 D  6   8"K L 6% 8D    D* 2   D *    1  D       .     #*6Q(/7(8                 6Q(/>$8                 6Q(/<<8                 6Q(/;78 H)/)J !!/"J ;)/<J ))/>J 7;/;J >7/HJ ></$J ">/;J  #*  6Q(/H(8             6Q(/()8 H</"J !>/$J >;/<J "H/>J #*    6Q$/$8                 6Q(/")8                 6Q(/)8 H(/(J !$/<J ;"/!J <)/"J )"/>J ;"/HJ 6 8D      5          1  D    6   8D/     ?          1    1        /  ?   6* !8 1   #*    #*            11  1      1   #*   /     *            ?              !J        1 1     *      <(((   $;((((  *!5  2*/  = ?  1       1        21   #  1           1 2    1   1     * /       * !      1       / # #*          /   #       *     1/3 *     1 *    1  2      /       .     , #* 6 Q(/<<,,Q(/()8RR6 Q(/78RR6Q(/H(8 !</)J 7H/<J   , #* 6 Q(/<<,, Q(/)(8RR6 Q(/78  H7/$J 7H/"J   , , #* 6 Q(/<<,,Q(/()8RR6 Q(/<<,, Q(/)(8RR 6 Q(/>$,, Q(/!)8RR6 Q(/78RR6Q(/H(8 !7/$J 7!/(J N# #* ))/$J >7/HJ *!/ 1        ?  1    <(J #       $E$ / 3    $EH  HE$    7)J/ $EH   1   .    / -          1      / :   1    *   *  * 1    / 3      21 1 *              1         *1             /     ?          /  3 2      *1        1      ?    ?  ?         ?   /                 *1      *     /    1       1 1  1 2 /   1 2*                    1/ 3 1 2  ? 1 #    #  ?   1 *          /     1 21 *        = 6=8   D   D '              6  8/  1 2 1    *           = 6=8   ' B 1 5   #  C/ (  3  &4 ( , &  56 , &   7) 8 %229!                       !       )   %2 :!  :;9-9%%   4  %2<2!   "    #   $     %    = & >$ ?   +  !  " %229! &  '     '         " (  + &   9%               )   %-< , +    +   ?@ ( %22%! $        (     + &    :2  8             )   %9A-%B9 , +  "   C %22D! 
)   *  + % '        ,'% %   + &  D      E  ) &   E)&-2D! 9D-DA , + 8 %22B! '     '     ##     # %   + &  9F                )  ,)7 %2DF!                  $  #   6  G   :;/:2B-9A: C   "  %22%! '#             #  + &   :2                )   %BB-%<D 4@ ( "  8"   5  0 H     %22;!               '      ##   +      )   :% D!/ %-9< 4 &  %22F! '     %  -% . % # .     .   #  + &   99                 )    :9;-:D9 4 &   ) > > %22<! ' /) '##        . $ + %   . #   #     + &   ')+EC-)I2< D%D-D:A 0 6-J  "-(   %222! '         "     .  $  "   + &   %%      "  )  +    &   9<2 K 92; + " ! " ( 6 6  (  %22B! ' , % '##    + % '          )  %22B 5  :9 E$ :  9%9 9D9 " 8=    = %222! ' "  ,.   # ,0  '            + &  < +        =    8     +   8 =     2< %A< )  E 1 (  =   %2<%! =   $ 3@ 6 -C3@ 6 -0 3@ G-> ' 8->   > -0!  %%   1  9DDF ) 6-6 "-(   %22B! 0   % /#          ,           + &   %B  +           &   '  ) +&')I2B!  DAA-DA9 8  "  8  7   %229!   ,       +      )   %2 %!/ %:%-%D: 8 6-6 >-8 L >-M C  0 -J > %2<9! =     (  4  ! (0 & )*   3 @   ! ( 6 0 >  (0  "-(   %22;! *   $ %  ' 2       ,    #3 '      %   ,   '      + % %   " (  + &   %F  +             )   :9A-:9F =@ =@@  >  8  %222!         2(     . ,   #  + < &  +        =    8     +   8 =     << 2B = , %22<! '                           + &   %;  +             )   %:22-%9A;  ,@ %22D! '    #   ,    #      $       + &   9:   8             )  )N2D! <A-<B
2000
50
Specifying the Parameters of Centering Theory: a Corpus-Based Evaluation using Text from Application-Oriented Domains

M. Poesio,* H. Cheng,* R. Henschel,* J. Hitzeman,† R. Kibble,‡ and R. Stevenson§

* University of Edinburgh, ICCS and HCRC, {poesio, huac, henschel} [email protected]
† The MITRE Corporation, [email protected]
‡ University of Brighton, ITRI, [email protected]
§ University of Durham, Psychology and HCRC, [email protected]

Abstract

The definitions of the basic concepts, rules, and constraints of centering theory involve underspecified notions such as ‘previous utterance’, ‘realization’, and ‘ranking’. We attempted to find the best way of defining each such notion among those that can be annotated reliably, using a corpus of texts in two domains of practical interest. Our main result is that trying to reduce the number of utterances without a backward-looking center (CB) results in an increased number of cases in which some discourse entity, but not the CB, gets pronominalized, and vice versa.

1 MOTIVATION

Centering Theory (Grosz et al., 1995; Walker et al., 1998b) is best characterized as a ‘parametric’ theory: its key definitions and claims involve notions such as ‘utterance’, ‘realization’, and ‘ranking’ which are not completely specified; their precise definition is left as a matter for empirical research, and may vary from language to language. A first goal of the work presented in this paper was to find which way of specifying these parameters, among the many proposed in the literature, would make the claims of centering theory most accurate as predictors of coherence and pronominalization for English.
We did this by annotating a corpus of English texts with the sort of information required to implement some of the most popular variants of centering theory, and using this corpus to automatically check two central claims of the theory: the claim that all utterances have a backward-looking center (CB) (Constraint 1), and the claim that if any discourse entity is pronominalized, the CB is (Rule 1). In doing this, we tried to make sure we would only use information that could be annotated reliably.

Our second goal was to evaluate the predictions of the theory in domains of interest for real applications - natural language generation, in our case. For this reason, we used texts in two genres not yet studied, but of interest to developers of NLG systems: instructional texts and descriptions of museum objects to be displayed on Web pages.

The paper is organized as follows. We first review the basic notions of the theory. We then discuss the methods we used: our annotation method and how the annotation was used. In Section 4 we present the results of the study. A discussion of these results follows.

2 FUNDAMENTALS OF CENTERING THEORY

Centering theory (Grosz et al., 1995; Walker et al., 1998b) is an ‘object-centered’ theory of text coherence: it attempts to characterize the texts that can be considered coherent on the basis of the way discourse entities are introduced and discussed.[1] At the same time, it is also meant to be a theory of salience: i.e., it attempts to predict which entities will be most salient at any given time (which should be useful for a natural language generator, since it is these entities that are most typically pronominalized (Gundel et al., 1993)). According to the theory, every UTTERANCE in a spoken dialogue or written text introduces into the discourse a number of FORWARD-LOOKING CENTERS (CFs).

[1] For a discussion of ‘object-centered’ vs. ‘relation-centered’ notions of coherence, see (Stevenson et al., 2000).

CFs correspond more or less
to discourse entities in the sense of (Karttunen, 1976; Webber, 1978; Heim, 1982), and can be linked to CFs introduced by previous or successive utterances. Forward-looking centers are RANKED, and because of this ranking, some CFs acquire particular prominence. Among them is the so-called BACKWARD-LOOKING CENTER (CB), defined as follows:

Backward-Looking Center (CB): CB(U_i+1), the BACKWARD-LOOKING CENTER of utterance U_i+1, is the highest-ranked element of CF(U_i) that is realized in U_i+1.

Utterance U_i+1 is classified as a CONTINUE if CB(U_i+1) = CB(U_i) and CB(U_i+1) is the most highly ranked CF of U_i+1; as a RETAIN if the CB remains the same, but it is no longer the most highly ranked CF; and as a SHIFT if CB(U_i+1) ≠ CB(U_i).

The main claims of the theory are articulated in terms of constraints and rules on CFs and the CB.

Constraint 1: All utterances of a segment except for the 1st have exactly one CB.

Rule 1: If any CF is pronominalized, the CB is.

Rule 2: (Sequences of) continuations are preferred over (sequences of) retains, which are preferred over (sequences of) shifts.

Constraint 1 and Rule 2 express a preference for utterances in a text to talk about the same objects; Rule 1 is the main claim of the theory about pronominalization. In this paper we concentrate on Constraint 1 and Rule 1.

One of the most unusual features of centering theory is that the notions of utterance, previous utterance, ranking, and realization used in the definitions above are left unspecified, to be appropriately defined on the basis of empirical evidence, and possibly in a different way for each language.
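As a concrete illustration, the CB definition, the transition classification, and Rule 1 can be sketched in code as follows. This is a hypothetical sketch, not the authors' implementation; CF lists are assumed to be given as ranked lists of entity identifiers, highest-ranked first.

```python
def compute_cb(cf_prev, cf_curr):
    """CB(U_i+1): the highest-ranked element of CF(U_i) that is
    realized in U_i+1; None if no such element exists."""
    realized = set(cf_curr)
    for entity in cf_prev:            # cf_prev is ranked, highest first
        if entity in realized:
            return entity
    return None

def classify_transition(cb_prev, cf_prev, cf_curr):
    """Classify U_i+1 as CONTINUE, RETAIN, or SHIFT (None if no CB)."""
    cb = compute_cb(cf_prev, cf_curr)
    if cb is None:
        return None                   # violates Constraint 1
    if cb_prev is not None and cb != cb_prev:
        return "SHIFT"
    # CONTINUE requires the CB to also be the highest-ranked CF of U_i+1
    return "CONTINUE" if cf_curr and cf_curr[0] == cb else "RETAIN"

def violates_rule1(cb, pronominalized):
    """Rule 1: if any CF of the utterance is pronominalized, the CB is.
    `pronominalized` maps entity id -> True if realized as a pronoun."""
    return any(pronominalized.values()) and not pronominalized.get(cb, False)
```

For instance, with CF(U_i) = [john, mary] and CF(U_i+1) = [mary, john], the CB of U_i+1 is john (the highest-ranked CF of U_i realized in U_i+1), but the most highly ranked CF is mary, so the transition is a RETAIN.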
As a result, centering theory is best viewed as a cluster of theories, each of which specifies the parameters in a different way: e.g., ranking has been claimed to depend on grammatical function (Kameyama, 1985; Brennan et al., 1987), on thematic roles (Cote, 1998), and on the discourse status of the CFs (Strube and Hahn, 1999); there are at least two definitions of what counts as ‘previous utterance’ (Kameyama, 1998; Suri and McCoy, 1994); and ‘realization’ can be interpreted either in a strict sense, i.e., by taking a CF to be realized in an utterance only if an NP in that utterance denotes that CF, or in a looser sense, by also counting a CF as ‘realized’ if it is referred to indirectly by means of a bridging reference (Clark, 1977), i.e., an anaphoric expression that refers to an object which wasn’t mentioned before but is somehow related to an object that already has been, as in the vase . . . the handle (see, e.g., the discussion in (Grosz et al., 1995; Walker et al., 1998b)).

3 METHODS

The fact that so many basic notions of centering theory do not have a completely specified definition makes empirical verification of the theory rather difficult. Because any attempt at directly annotating a corpus for ‘utterances’ and their CBs is bound to force the annotators to adopt some specification of the basic notions of the theory, previous studies have tended to study a particular variant of the theory (Di Eugenio, 1998; Kameyama, 1998; Passonneau, 1993; Strube and Hahn, 1999; Walker, 1989). A notable exception is (Tetreault, 1999), which used an annotated corpus to compare the performance of two variants of centering theory. The work discussed here, like Tetreault’s, is an attempt at using corpora to compare different versions of centering theory, but considering also parameters of centering theory not studied in this earlier work.
In particular, we looked at different ways of defining the notion of utterance, and we studied the definition of realization and, more generally, the role of semantic information. We did this by annotating a corpus with information that has been claimed by one or the other version of centering theory to play a role in the definitions of its basic notions - e.g., the grammatical function of an NP, anaphoric relations (including information about bridging references), and how sentences break up into clauses and subclausal units - and then tried to find the best way of specifying these notions automatically, by trying out different configurations of parameters and counting the number of violations of the constraints and rules that would result from adopting a particular parameter configuration.

The Data

The aim of our project, which is called GNOME and whose home page is at http://www.hcrc.ed.ac.uk/~gnome, is to develop NP generation algorithms whose generality is to be verified by incorporating them in two distinct systems: the ILEX system, developed at the University of Edinburgh, that generates Web pages describing museum objects on the basis of the perceived status of its user’s knowledge and of the objects she previously looked at (Oberlander et al., 1998); and the ICONOCLAST system, developed at the University of Brighton, that supports the creation of patient information leaflets (Scott et al., 1998). The corpus we collected includes texts from both of the domains we are studying. The texts in the museum domain consist of descriptions of museum objects and brief texts about the artists that produced them; the texts in the pharmaceutical domain are leaflets providing the patients with the legally mandatory information about their medicine. The total size of the corpus is about 6,000 NPs.
For this study we used about half of each subset, for a total of about 3,000 NPs, of which 103 are third-person pronouns (72 in the museum domain, 31 in the pharmaceutical domain) and 61 are third-person possessive pronouns (58 in the museum domain, 3 in the pharmaceutical domain).

Annotation

Previous empirical studies of centering theory typically involved a single annotator annotating her corpus according to her own subjective judgment (Passonneau, 1993; Kameyama, 1998; Strube and Hahn, 1999). One of our goals was to use for our study only information that could be annotated reliably (Passonneau and Litman, 1993; Carletta, 1996), as we believe this will make our results easier to replicate. The price we paid to achieve replicability is that we couldn’t test all hypotheses proposed in the literature, especially about segmentation and about ranking. We discuss some of the problems in what follows. (The latest version of the annotation manual is available from the GNOME project’s home page.) We used eight annotators for the reliability study and the annotation.

Utterances

Kameyama (1998) noted that identifying utterances with sentences is problematic in the case of multiclausal sentences: e.g., grammatical function ranking becomes difficult to measure, as there may be more than one subject. She proposed to use all and only tensed clauses, instead of sentences, as utterance units, and then classified finite clauses into (i) utterance units that constitute a ‘permanent’ update of the local focus (these include coordinated clauses and adjuncts) and (ii) utterance units that result in updates that are then erased, much as the information provided by subordinated discourse segments is erased when they are popped. Kameyama called these EMBEDDED utterance units, and proposed that clauses that serve as verbal complements behave this way.
Suri and McCoy (1994) did a study that led them to propose that some types of adjuncts - in particular, clauses headed by after and before - should be treated as ‘embedded’ rather than as ‘permanent updates’ as suggested by Kameyama; these results were subsequently confirmed by the more controlled experiments of Pearson et al. (2000). Neither Kameyama nor Suri and McCoy discuss parentheticals; Kameyama only briefly mentions relative clauses, but doesn’t analyze them in detail.

In order to evaluate these definitions of utterance (sentences versus finite clauses), as well as the different ways of defining ‘previous utterance’, we marked up in our corpus what we called (DISCOURSE) UNITS. These include clauses, as well as other sentence subconstituents which may be treated as separate utterances, including parentheticals, preposed PPs, and (the second element of) coordinated VPs. The instructions for marking up units were in part derived from (Marcu, 1999); for each unit, the following attributes were marked:

- utype: whether the unit is a main clause, a relative clause, appositive, a parenthetical, etc.
- verbed: whether the unit contains a verb or not.
- finite: for verbed units, whether the verb is finite or not.
- subject: for verbed units, whether they have a full subject, an empty subject (expletive, as in there sentences), or no subject (e.g., for infinitival clauses).

The agreement on identifying the boundaries of units, using the κ statistic discussed in (Carletta, 1996), was κ = .9 (for two annotators and 500 units); the agreement on features (2 annotators and at least 200 units) was as follows:

  Attribute   κ value
  utype       .76
  verbed      .9
  finite      .81
  subject     .86

NPs

Our instructions for identifying NP markables derive from those proposed in the MATE project scheme for annotating anaphoric relations (Poesio et al., 1999). We annotated attributes of NPs which could be used to define their ranking, including:

- The NP type, cat (pronoun, proper name, etc.)
- A few other ‘basic’ syntactic features, num, per, and gen, that could be used to identify contexts in which the antecedent of a pronoun could be identified unambiguously;
- The grammatical function, gf;
- ani: whether the object denoted is animate or inanimate;
- deix: whether the object is a deictic reference or not.

The agreement values for these attributes are as follows:

  Attribute   κ value
  ani         .81
  cat         .9
  deix        .81
  gen         .89
  gf          .85
  num         .84
  per         .9

One of the features of NPs claimed to affect ranking (Sidner, 1979; Cote, 1998) that we haven’t so far been able to annotate, because of failure to reach acceptable agreement, is thematic roles (κ = .35).

Anaphoric information

Finally, in order to compute whether a CF from an utterance was realized directly or indirectly in the following utterance, we marked up anaphoric relations between NPs, again using a variant of the MATE scheme. Theories of focusing such as (Sidner, 1979; Strube and Hahn, 1999), as well as our own early experiments with centering, suggested that indirect realization can play quite a crucial role in maintaining the CB; however, previous work, particularly in the context of the MUC initiative, suggested that while it’s fairly easy to achieve agreement on identity relations, marking up bridging references is quite hard; this was confirmed by, e.g., Poesio and Vieira (1998). As a result we did annotate this type of relation, but to achieve a reasonable agreement, and to somehow contain the annotators’ work, we limited the types of relations annotators were supposed to mark up, and we specified priorities. Thus, besides identity (IDENT) we only marked up three non-identity (‘bridging’ (Clark, 1977)) relations, and only relations between objects.
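The κ agreement figures reported in this section correct raw agreement for chance, following (Carletta, 1996). For two coders, the statistic can be computed as in this sketch (illustrative only; this is not the tool used in the study):

```python
from collections import Counter

def kappa(coder_a, coder_b):
    """Chance-corrected agreement for two coders labelling the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement from each coder's marginal category frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    chance = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - chance) / (1 - chance)
```

Raw percentage agreement can look high on skewed category distributions; κ discounts what the two coders would have agreed on by chance, which is why it is preferred for reliability studies like this one.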
The relations we mark up are a subset of those proposed in the ‘extended relations’ version of the MATE scheme (Poesio et al., 1999) and include set membership (ELEMENT), subset (SUBSET), and ‘generalized possession’ (POSS), which includes part-of relations as well as more traditional ownership relations.

As expected, we achieved a rather good agreement on identity relations. In our most recent analysis (two annotators looking at the anaphoric relations between 200 NPs) we observed no real disagreements; 79.4% of these relations were marked up by both annotators, 12.8% by only one of them, and in 7.7% of the cases one of the annotators marked up a closer antecedent than the other. Concerning bridges, limiting the relations did limit the disagreements among annotators (only 4.8% of the relations are actually marked differently), but only 22% of bridging references were marked in the same way by both annotators; 73.17% of relations are marked by only one or the other annotator. So reaching agreement on this information involved several discussions between annotators and more than one pass over the corpus.

Segmentation

Segmenting text in a reliable fashion is still an open problem, and in addition the relation between centering (i.e., local focus shifts) and segmentation (i.e., global focus shifts) is still not clear: some see them as independent aspects of attentional structure, whereas other researchers define centering transitions with respect to segments (see, e.g., the discussion in the introduction to (Walker et al., 1998b)). Our preliminary experiments at annotating discourse structure didn’t give good results, either. Therefore, we only used the layout structure of the texts as a rough indication of discourse structure. In the museum domain, each object description was treated as a separate segment; in the pharmaceutical domain, each subsection of a leaflet was treated as a separate segment.
We then identified by hand those violations of Constraint 1 that appeared to be motivated by too broad a segmentation of the text.[2]

[2] (Cristea et al., 2000) showed that it is indeed possible to achieve good agreement on discourse segmentation, but that it requires intensive training and repeated iterations; we intend to take advantage of a corpus already annotated in this way in future work.

Automatic computation of centering information

The annotation thus produced was used to automatically compute utterances according to the particular configuration of parameters chosen, and then to compute the CFs and the CB (if any) of each utterance on the basis of the anaphoric information and according to the notion of ranking specified. This information was then used to find violations of Constraint 1 and Rule 1. The behavior of the script that computes this information depends on the following parameters:

- utterance: whether sentences, finite clauses, or verbed clauses should be treated as utterances.
- previous utterance: whether adjunct clauses should be treated Kameyama-style or Suri-style.
- rank: whether CFs should be ranked according to grammatical function or according to discourse status in Strube and Hahn’s sense.
- realization: whether only direct realization should be counted, or also indirect realization via bridging references.

4 MAIN RESULTS

The principle we used to evaluate the different configurations of the theory was that the best definition of the parameters is the one that leads to the fewest violations of Constraint 1 and Rule 1. We discuss the results for each principle.

Constraint 1: All utterances of a segment except for the 1st have precisely one CB

Our first set of figures concerns Constraint 1: how many utterances have a CB.
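The violation counts reported below come from sweeping a parameter space of this kind. Schematically, such a driver might look as follows; this is a reconstruction for illustration, and `build_cf_lists` is a hypothetical stand-in for the annotation-driven computation just described.

```python
from itertools import product

# Parameter space as described in the text
PARAMS = {
    "utterance":   ["sentence", "finite", "verbed"],
    "prev":        ["kameyama", "suri"],
    "rank":        ["gf", "discourse_status"],
    "realization": ["direct", "indirect"],
}

def count_violations(segments, build_cf_lists):
    """For every parameter configuration, count non-segment-initial
    utterances with no CB.  `build_cf_lists(segment, **cfg)` is assumed
    to return one ranked CF list per utterance under that configuration."""
    results = {}
    for values in product(*PARAMS.values()):
        cfg = dict(zip(PARAMS, values))
        no_cb = 0
        for segment in segments:
            cf_lists = build_cf_lists(segment, **cfg)
            for prev_cfs, curr_cfs in zip(cf_lists, cf_lists[1:]):
                if not set(prev_cfs) & set(curr_cfs):   # no shared CF: no CB
                    no_cb += 1
        results[values] = no_cb
    return results

# Tiny demonstration with a hand-built stand-in for build_cf_lists:
def flat_cf_lists(segment, **cfg):
    return segment          # segment already holds one CF list per utterance

demo = count_violations([[["vases"], ["bases"], ["bases", "straw"]]],
                        flat_cf_lists)
```

In the demonstration, the second utterance shares no CF with the first (one violation), while the third continues the second, so every configuration reports one violation.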
This constraint can be used to evaluate how well centering theory predicts coherence, in the following sense: assuming that all our texts are coherent, if centering were the only factor behind coherence, all utterances should verify this constraint.

The first table shows the results obtained by choosing the configuration that comes closest to the one suggested by Kameyama (1998): utterance=finite, prev=kameyama, rank=gf, realization=direct. The first column lists the number of utterances that satisfy Constraint 1; the second, those that do not satisfy it but are segment-initial; the third, those that do not satisfy it and are not segment-initial.

             CB   Segment-initial   No CB   Total
  Museum    132                35     245     412
  Pharmacy  158                13     198     369
  Total     290                48     443     791

The previous table shows that with this configuration of parameters, most utterances do not satisfy Constraint 1 in the strict sense even if we take into account text segmentation (admittedly, a very rough one). If we take sentences as utterances, instead of finite clauses, we get fewer violations, although about 25% of the total number of utterances are violations:

             CB   Segment-initial   No CB   Total
  Museum    120                22      85     227
  Pharmacy  152                 8      51     211
  Total     272                30     136     438

Using Suri and McCoy’s definition of previous utterance, instead of Kameyama’s (i.e., treating adjuncts as embedded utterances), leads to a slight improvement over Kameyama’s proposal, but still not as good as using sentences:

             CB   Segment-initial   No CB   Total
  Museum    140                35     237     412
  Pharmacy  167                14     188     369
  Total     307                49     425     791

What about the finite clause types not considered by Kameyama or Suri and McCoy?
It turns out that we get better results if we do not treat as utterances relative clauses (which anyway always have a CB, under standard syntactic assumptions about the presence of traces referring to the modified noun phrase), parentheticals, or clauses that occur in subject position; and if we treat as a single utterance matrix clauses with empty subjects together with their complements (as in it is possible that John will arrive tomorrow).

             CB   Segment-initial   No CB   Total
  Museum    143                35     153     331
  Pharmacy  161                14     159     334
  Total     304                49     312     665

But by far the most significant improvement to the percentage of utterances that satisfy Constraint 1 comes from adopting a looser definition of ‘realizes’, i.e., by allowing a discourse entity to serve as CB of an utterance even if it is only referred to indirectly in that utterance by means of a bridging reference, as originally proposed by Sidner (1979) for her discourse focus. The following sequence of utterances explains why this could lead to fewer violations of Constraint 1:

(1) (u1) These “egg vases” are of exceptional quality:
    (u2) basketwork bases support egg-shaped bodies
    (u3) and bundles of straw form the handles,
    (u4) while small eggs resting in straw nests serve as the finial for each lid.
    (u5) Each vase is decorated with inlaid decoration: . . .

In (1), u1 is followed by four utterances. Only the last of these directly refers to the set of egg vases introduced in u1, while they all contain implicit references to these objects. If we adopt this looser notion of realization, the figures improve dramatically, even with the rather restricted set of relations on which our annotators agree.
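The looser ‘realizes’ test illustrated by (1) can be sketched as follows. This is hypothetical; the entity identifiers and the bridge table stand in for the annotated IDENT/ELEMENT/SUBSET/POSS links.

```python
def realizes(entity, utterance_entities, bridges):
    """True if `entity` is realized in the utterance: directly (some NP
    denotes it) or indirectly (some NP bridges to it, e.g. part-of)."""
    if entity in utterance_entities:
        return True                                 # direct realization
    return any(bridges.get(e) == entity for e in utterance_entities)

# Mirroring example (1): u2 mentions the bases, which stand in a
# POSS (part-of) relation to the egg vases introduced in u1.
bridges = {"bases": "egg_vases", "handles": "egg_vases"}
assert realizes("egg_vases", {"bases", "bodies"}, bridges)   # indirect
assert not realizes("egg_vases", {"crystal"}, bridges)
```

Under the strict definition only the first branch counts, which is why u2-u4 in (1) would each lack a CB.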
Now the majority of utterances satisfy Constraint 1:

             CB   Segment-initial   No CB   Total
  Museum    225                35      71     331
  Pharmacy  174                14     146     334
  Total     399                49     217     665

And of course we get even better results by treating sentences as utterances:

             CB   Segment-initial   No CB   Total
  Museum    171                17      39     227
  Pharmacy  168                 7      36     211
  Total     339                24      75     438

It is important, however, to notice that even under the best configuration, at least 17% of utterances violate the constraint. The (possibly obvious) explanation is that although coherence is often achieved by means of links between objects, this is not the only way to make texts coherent. So, in the museum domain, we find utterances that do not refer to any of the previous CFs because they express generic statements about the class of objects of which the object under discussion is an instance, or, vice versa, utterances that make a generic point that will then be illustrated by a specific object. In the following example, the second utterance gives some background concerning the decoration of a particular object.

(2) (u1) On the drawer above the door, gilt-bronze military trophies flank a medallion portrait of Louis XIV.
    (u2) In the Dutch Wars of 1672-1678, France fought simultaneously against the Dutch, Spanish, and Imperial armies, defeating them all.
    (u3) This cabinet celebrates the Treaty of Nijmegen, which concluded the war.

Coherence can also be achieved by explicit coherence relations, such as EXEMPLIFICATION in the following example:

(3) (u1) Jewelry is often worn to signal membership of a particular social group.
    (u2) The Beatles brooch shown previously is another case in point:

Rule 1: if any NP is pronominalized, the CB is

In the previous section we saw that allowing bridging references to maintain the CB leads to fewer violations of Constraint 1.
One should not, however, immediately conclude that it would be a good idea to replace the strict definition of ‘realizes’ with a looser one, because there is, unfortunately, a side effect: adopting an indirect notion of realizes leads to more violations of Rule 1. The figures are as follows. Using utterance=s, rank=gf, realizes=direct, there are 22 pronouns violating Rule 1 (9 museum, 13 pharmacy) (13.4%), whereas with realizes=indirect we have 38 violations (25, 13) (23%); if we choose utterance=finite, prev=suri, we have 23 violations of Rule 1 with realizes=direct (13 + 10) (14%), and 32 with realizes=indirect (21 + 11) (19.5%). Using functional centering (Strube and Hahn, 1999) to rank the CFs led to no improvements, because of the almost perfect correlation in our domain between subjecthood and being discourse-old.

One reason for these problems is illustrated by (4).

(4) (u1) A great refinement among armorial signets was to reproduce not only the coat-of-arms but the correct tinctures;
    (u2) they were repeated in colour on the reverse side
    (u3) and the crystal would then be set in the gold bezel.

They in u2 refers back to the correct tinctures (or, possibly, the coat-of-arms), which however only occurs in object position in a (non-finite) complement clause in (u1), and therefore has lower ranking than armorial signets, which is realized in (u2) by the bridge the reverse side and therefore becomes the CB, having higher rank in (u1), but is not pronominalized.

In the pharmaceutical leaflets we found a number of violations of Rule 1 towards the end of texts, when the product is referred to. A possible explanation is that after the product has been mentioned sentence after sentence in the text, by the end of the text it is salient enough that there is no need to put it again in the local focus by mentioning it explicitly. E.g., it in the following example refers to the cream, not mentioned in either of the previous two utterances.
(5) (u1) A child of 4 years needs about a third of the adult amount. (u2) A course of treatment for a child should not normally last more than five days (u3) unless your doctor has told you to use it for longer.

5 DISCUSSION

Our main result is that there seems to be a tradeoff between Constraint 1 and Rule 1. Allowing for a definition of 'realizes' that makes the CB behave more like Sidner's Discourse Focus (Sidner, 1979) leads to a very significant reduction in the number of violations of Constraint 1.[3] We also noted, however, that interpreting 'realizes' in this way results in more violations of Rule 1. (No differences were found when functional centering was used to rank CFs instead of grammatical function.) The problem raised by these results is that whereas centering is intended as an account of both coherence and local salience, different concepts may have to be used in Constraint 1 and Rule 1, as in Sidner's theory. E.g., we might have a 'Center of Coherence', analogous to Sidner's discourse focus, which can be realized indirectly; and a 'Center of Salience', similar to her actor focus, which can only be realized directly. Constraint 1 would be about the Center of Coherence, whereas Rule 1 would be about the Center of Salience. Indeed, many versions of centering theory have elevated the CP to the rank of a second center.[4] We also saw that texts can be coherent even when Constraint 1 is violated, as coherence can be ensured by other means (e.g., by rhetorical relations).

[3] Footnote 2, page 3 of the intro to (Walker et al., 1998b) suggests a weaker interpretation for the Constraint: 'there is no more than one CB for utterance'. This weaker form of the Constraint does hold for most utterances, but it's almost vacuous, especially for grammatical-function ranking, given that utterances have at most one subject.
This, again, suggests possible revisions to Constraint 1, requiring every utterance either to have a center of coherence, or to be linked by a rhetorical relation to the previous utterance. Finally, we saw that we get fewer violations of Constraint 1 by adopting sentences as our notion of utterance; however, again, this results in more violations of Rule 1. If finite clauses are used as utterances, we found that certain types of finite clauses not previously discussed, including relative clauses and matrix clauses with empty subjects, are best not treated as utterances. We didn't find significant differences between Kameyama's and Suri and McCoy's definitions of 'previous utterance'. We believe, however, that more work is still needed to identify a completely satisfactory way of breaking up sentences into utterance units.

ACKNOWLEDGMENTS

We wish to thank Kees van Deemter, Barbara di Eugenio, Nikiforos Karamanis and Donia Scott for comments and suggestions. Massimo Poesio is supported by an EPSRC Advanced Fellowship. Hua Cheng, Renate Henschel and Rodger Kibble were in part supported by the EPSRC project GNOME, GR/L51126/01. Janet Hitzeman was in part supported by the EPSRC project SOLE.

[4] This separation among a 'center of coherence' and a 'center of salience' is independently motivated by considerations about the division of labor between the text planner and the sentence planner in a generation system; see, e.g., (Kibble, 1999).

References

S. E. Brennan, M. W. Friedman, and C. J. Pollard. 1987. A centering approach to pronouns. In Proc. of the 25th ACL, pages 155-162, June.
J. Carletta. 1996. Assessing agreement on classification tasks: the kappa statistic. Computational Linguistics, 22(2):249-254.
H. H. Clark. 1977. Inferences in comprehension. In D. Laberge and S. J. Samuels, editors, Basic Processes in Reading: Perception and Comprehension. Lawrence Erlbaum.
S. Cote. 1998. Ranking forward-looking centers. In M. A. Walker, A. K. Joshi, and E. F.
Prince, editors, Centering Theory in Discourse, chapter 4, pages 55-70. Oxford.
D. Cristea, N. Ide, D. Marcu, and V. Tablan. 2000. Discourse structure and co-reference: An empirical study. In Proc. of COLING.
B. Di Eugenio. 1998. Centering in Italian. In M. A. Walker, A. K. Joshi, and E. F. Prince, editors, Centering Theory in Discourse, chapter 7, pages 115-138. Oxford.
B. J. Grosz, A. K. Joshi, and S. Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):202-225.
J. K. Gundel, N. Hedberg, and R. Zacharski. 1993. Cognitive status and the form of referring expressions in discourse. Language, 69(2):274-307.
I. Heim. 1982. The Semantics of Definite and Indefinite Noun Phrases. Ph.D. thesis, University of Massachusetts at Amherst.
M. Kameyama. 1985. Zero Anaphora: The Case of Japanese. Ph.D. thesis, Stanford University.
M. Kameyama. 1998. Intra-sentential centering: A case study. In M. A. Walker, A. K. Joshi, and E. F. Prince, editors, Centering Theory in Discourse, chapter 6, pages 89-112. Oxford.
L. Karttunen. 1976. Discourse referents. In J. McCawley, editor, Syntax and Semantics 7 - Notes from the Linguistic Underground. Academic Press.
R. Kibble. 1999. Cb or not Cb? Centering applied to NLG. In Proc. of the ACL Workshop on Discourse and Reference.
D. Marcu. 1999. Instructions for manually annotating the discourse structures of texts. Unpublished manuscript, USC/ISI, May.
J. Oberlander, M. O'Donnell, A. Knott, and C. Mellish. 1998. Conversation in the museum: Experiments in dynamic hypermedia with the intelligent labelling explorer. New Review of Hypermedia and Multimedia, 4:11-32.
R. Passonneau and D. Litman. 1993. Feasibility of automated discourse segmentation. In Proceedings of the 31st Annual Meeting of the ACL.
R. J. Passonneau. 1993. Getting and keeping the center of attention. In M. Bates and R. M. Weischedel, editors, Challenges in Natural Language Processing, chapter 7, pages 179-227.
Cambridge.
J. Pearson, R. Stevenson, and M. Poesio. 2000. Pronoun resolution in complex sentences. In Proc. of AMLAP, Leiden.
M. Poesio and R. Vieira. 1998. A corpus-based investigation of definite description use. Computational Linguistics, 24(2):183-216, June.
M. Poesio, F. Bruneseaux, and L. Romary. 1999. The MATE meta-scheme for coreference in dialogues in multiple languages. In M. Walker, editor, Proc. of the ACL Workshop on Standards and Tools for Discourse Tagging, pages 65-74.
D. Scott, R. Power, and R. Evans. 1998. Generation as a solution to its own problem. In Proc. of the 9th International Workshop on Natural Language Generation, Niagara-on-the-Lake, CA.
C. L. Sidner. 1979. Towards a computational theory of definite anaphora comprehension in English discourse. Ph.D. thesis, MIT.
R. Stevenson, A. Knott, J. Oberlander, and S. McDonald. 2000. Interpreting pronouns and connectives. Language and Cognitive Processes, 15.
M. Strube and U. Hahn. 1999. Functional centering: Grounding referential coherence in information structure. Computational Linguistics, 25(3):309-344.
L. Z. Suri and K. F. McCoy. 1994. RAFT/RAPR and centering: A comparison and discussion of problems related to processing complex sentences. Computational Linguistics, 20(2):301-317.
J. R. Tetreault. 1999. Analysis of syntax-based pronoun resolution methods. In Proc. of the 37th ACL, pages 602-605, University of Maryland, June. ACL.
M. A. Walker, A. K. Joshi, and E. F. Prince, editors. 1998b. Centering Theory in Discourse. Oxford.
M. A. Walker. 1989. Evaluating discourse processing algorithms. In Proc. ACL-89, pages 251-261, Vancouver, CA, June.
B. L. Webber. 1978. A formal approach to discourse anaphora. Report 3761, BBN, Cambridge, MA.
The Role of Centering Theory's Rough-Shift in the Teaching and Evaluation of Writing Skills

Eleni Miltsakaki
University of Pennsylvania
Philadelphia, PA, USA
elenimi@unagi.cis.upenn.edu

Karen Kukich
Educational Testing Service
Princeton, NJ, USA
[email protected]

Abstract

Existing software systems for automated essay scoring can provide NLP researchers with opportunities to test certain theoretical hypotheses, including some derived from Centering Theory. In this study we employ ETS's e-rater essay scoring system to examine whether local discourse coherence, as defined by a measure of Rough-Shift transitions, might be a significant contributor to the evaluation of essays. Our positive results indicate that Rough-Shifts do indeed capture a source of incoherence, one that has not been closely examined in the Centering literature. These results not only justify Rough-Shifts as a valid transition type, but they also support the original formulation of Centering as a measure of discourse continuity even in pronominal-free text.

1 Introduction

The task of evaluating a student's writing ability has traditionally been a labor-intensive human endeavor. However, several different software systems, e.g., PEG (Page and Peterson, 1995), Intelligent Essay Assessor[1] and e-rater[2], are now being used to perform this task fully automatically. Furthermore, by at least one measure, these software systems evaluate student essays with the same degree of accuracy as human experts. That is, computer-generated scores tend to match human expert scores as frequently as two human scores match each other (Burstein et al., 1998). Essay scoring systems such as these can provide NLP researchers with opportunities to test certain theoretical hypotheses and to explore a variety of practical issues in computational linguistics.
In this study, we employ the e-rater essay scoring system to test a hypothesis related to Centering Theory (Joshi and Weinstein, 1981; Grosz et al., 1995, inter alia). We focus on Centering Theory's Rough-Shift transition, which is the least well studied among the four transition types. In particular, we examine whether the discourse coherence found in an essay, as defined by a measure of the relative proportion of Rough-Shift transitions, might be a significant contributor to the accuracy of computer-generated essay scores. Our positive finding validates the role of the Rough-Shift transition and suggests a route for exploring Centering Theory's practical applicability to writing evaluation and instruction.

[1] http://lsa.colorado.edu
[2] http://www.ets.org/research/erater.html

2 The e-rater essay scoring system

One goal of automatic essay scoring systems such as e-rater is to represent the criteria that human experts use to evaluate essays. The writing features that e-rater evaluates were specifically chosen to reflect scoring criteria for the essay portion of the Graduate Management Admissions Test (GMAT). These criteria are articulated in GMAT test preparation materials at http://www.gmat.org. In e-rater, syntactic variety is represented by features that quantify occurrences of clause types. Logical organization and clear transitions are represented by features that quantify cue words in certain syntactic constructions. The existence of main and supporting points is represented by features that detect where new points begin and where they are developed. E-rater also includes features that quantify the appropriateness of the vocabulary content of an essay. One feature of writing valued by writing experts that is not explicitly represented in the current version of e-rater is local coherence. Centering Theory provides an algorithm for computing local coherence in written discourse.
Our study investigates the applicability of Centering Theory's local coherence measure to essay evaluation by determining the effect of adding this new feature to e-rater's existing array of features.

3 Overview of Centering

A synthesis of two different lines of work (Joshi and Kuhn, 1979; Joshi and Weinstein, 1981) and (Sidner, 1979; Grosz, 1977; Grosz and Sidner, 1986) yielded the formulation of Centering Theory as a model for monitoring local focus in discourse. The Centering model was designed to account for those aspects of processing that are responsible for the difference in the perceived coherence of discourses such as those demonstrated in (1) and (2) below (examples from Hudson-D'Zmura (1988)).

(1) a. John went to his favorite music store to buy a piano.
    b. He had frequented the store for many years.
    c. He was excited that he could finally buy a piano.
    d. He arrived just as the store was closing for the day.

(2) a. John went to his favorite music store to buy a piano.
    b. It was a store John had frequented for many years.
    c. He was excited that he could finally buy a piano.
    d. It was closing just as John arrived.

Discourse (1) is intuitively more coherent than discourse (2). This difference may be seen to arise from the different degrees of continuity in what the discourse is about. Discourse (1) centers a single individual (John) whereas discourse (2) seems to focus in and out on different entities (John, store, John, store). Centering is designed to capture these fluctuations in continuity.

4 The Centering model

In this section, we present the basic definitions and common assumptions in Centering as discussed in the literature (e.g., Walker et al. (1998)). We present the assumptions and modifications we made for this study below, in the section on the e-rater Centering study.

4.1 Discourse segments and entities

Discourse consists of a sequence of textual segments and each segment consists of a sequence of utterances.
In Centering Theory, utterances are designated by U1, ..., Un. Each utterance Ui evokes a set of discourse entities, the FORWARD-LOOKING CENTERS, designated by Cf(Ui). The members of the Cf set are ranked according to discourse salience. (Ranking is described below.) The highest-ranked member of the Cf set is the PREFERRED CENTER, Cp. A BACKWARD-LOOKING CENTER, Cb, is also identified for utterance Ui. The highest-ranked entity in the previous utterance, Cf(Ui-1), that is realized in the current utterance, Ui, is its designated BACKWARD-LOOKING CENTER, Cb. The BACKWARD-LOOKING CENTER is a special member of the Cf set because it represents the discourse entity that Ui is about, what in the literature is often called the 'topic' (Reinhart, 1981; Horn, 1986). The Cp for a given utterance may be identical with its Cb, but not necessarily so. It is precisely this distinction between looking back in the discourse with the Cb and projecting preferences for interpretations in the subsequent discourse with the Cp that provides the key element in computing local coherence in discourse.

4.2 Centering transitions

Four types of transitions, reflecting four degrees of coherence, are defined in Centering. They are shown in transition ordering rule (3). The rules for computing the transitions are shown in Table 1.

(3) Transition ordering rule: Continue is preferred to Retain, which is preferred to Smooth-Shift, which is preferred to Rough-Shift.

Centering defines one more rule, the Pronoun Rule, which we will discuss in detail below.

                     Cb(Ui) = Cb(Ui-1)    Cb(Ui) ≠ Cb(Ui-1)
  Cb(Ui) = Cp        Continue             Smooth-Shift
  Cb(Ui) ≠ Cp        Retain               Rough-Shift

  Table 1: Table of transitions

4.3 Utterance

In early formulations of Centering Theory, the 'utterance' was not defined explicitly.
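Table 1 can be read off as a small decision procedure. The sketch below is our own illustration, not part of the system described in this paper; the treatment of an utterance whose predecessor has no Cb varies across formulations and is an assumption here.

```python
# Classify a Centering transition per Table 1 (illustrative sketch).
def classify_transition(cb, prev_cb, cp):
    # Assumption: when the previous utterance has no Cb, the
    # Cb-continuity test is treated as satisfied (a common convention).
    if prev_cb is None or cb == prev_cb:
        return "Continue" if cb == cp else "Retain"
    return "Smooth-Shift" if cb == cp else "Rough-Shift"

print(classify_transition("John", "John", "John"))    # Continue
print(classify_transition("John", "John", "store"))   # Retain
print(classify_transition("store", "John", "store"))  # Smooth-Shift
print(classify_transition("store", "John", "John"))   # Rough-Shift
```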
In subsequent work (Kameyama, 1998), the utterance was defined as, roughly, the tensed clause, with relative clauses and clausal complements as exceptions. Based on crosslinguistic studies, Miltsakaki (1999) defined the utterance as the traditional 'sentence', i.e., the main clause and its accompanying subordinate and adjunct clauses constitute a single utterance.

4.4 Cf ranking

As mentioned earlier, the PREFERRED CENTER of an utterance is defined as the highest-ranked member of the Cf set. The ranking of the Cf members is determined by the salience status of the entities in the utterance and may vary crosslinguistically. Kameyama (1985) and Brennan et al. (1987) proposed that the Cf ranking for English is determined by grammatical function as follows:

(4) Rule for ranking of forward-looking centers: SUBJ > IND. OBJ > OBJ > OTHERS

Later crosslinguistic studies based on empirical work (Di Eugenio, 1998; Turan, 1995; Kameyama, 1998) determined the following detailed ranking, with QIS standing for quantified indefinite subjects (people, everyone, etc.) and PRO-ARB (we, you) for arbitrary plural pronominals.

(5) Revised rule for the ranking of forward-looking centers: SUBJ > IND. OBJ > OBJ > OTHERS > QIS, PRO-ARB

4.4.1 Complex NPs

In the case of complex NPs, which have the property of evoking multiple discourse entities (e.g. his mother, software industry), the working hypothesis commonly assumed (e.g. Walker and Prince (1996)) is ordering from left to right.[3]

5 The role of Rough-Shift transitions

As mentioned briefly earlier, the Centering model includes one more rule, the Pronoun Rule, given in (6).

(6) Pronoun Rule: If some element of Cf(Ui-1) is realized as a pronoun in Ui, then so is the Cb(Ui).

The Pronoun Rule reflects the intuition that pronominals are felicitously used to refer to discourse-salient entities. As a result, Cbs are often pronominalized, or even deleted (if the grammar allows it).
The Pronoun Rule then predicts that if there is only one pronoun in an utterance, this pronoun must realize the Cb. The Pronoun Rule and the distribution of forms (definite/indefinite NPs and pronominals) over transition types play a significant role in the development of anaphora resolution algorithms in NLP. Note that the utility of the Pronoun Rule and the Centering transitions in anaphora resolution algorithms relies heavily on the assumption that the texts under consideration are maximally coherent. In maximally coherent texts, however, Rough-Shift transitions are rare, and even in less than maximally coherent texts they occur infrequently. For this reason the distinction between Smooth-Shifts and Rough-Shifts was collapsed in previous work (Di Eugenio, 1998; Hurewitz, 1998, inter alia). The status of Rough-Shift transitions in the Centering model was therefore unclear, receiving only negative evidence: Rough-Shifts are valid because they are found to be rare in coherent discourse. In this study we gain insights pertaining to the nature of Rough-Shifts precisely because we are forced to drop the coherence assumption. Our data consist of student essays whose degree of coherence is under evaluation and therefore cannot be assumed. Using students' paragraph markings as segment boundaries, we 'centered' 100 GMAT essays. The average length of these essays was about …0 words. In the next section we show that Rough-Shift transitions provide a reliable measure of incoherence, correlating well with scores provided by writing experts. One of the crucial insights was that, in our data, the incoherence detected by the Rough-Shift measure is not due to violations of the Pronoun Rule or infelicitous use of pronominal forms in general.

[3] But see also Di Eugenio (1998) for the treatment of complex NPs in Italian.

[Table 2: Distribution of forms (definite phrases, indefinite phrases, pronouns) over Rough-Shift transitions, with row totals.]
In Table 2, we report the results of the distribution of forms over Rough-Shift transitions. Out of the … Rough-Shift transitions found in the set of 100 essays, in … occasions the Cp was a nominal phrase, either definite or indefinite. Pronominals occurred in only … cases, of which … instantiated the pronominals 'we' or 'you' in their generic sense. Table 2 strongly indicates that the student essays were not incoherent in terms of the processing load imposed on the reader to resolve anaphoric references. Instead, the incoherence in the essays was due to discontinuities caused by the students' introducing too many undeveloped topics within what should be a conceptually uniform segment, i.e. their paragraphs. This is, in fact, what Rough-Shift picked up. These results not only justify Rough-Shifts as a valid transition type but they also support the original formulation of Centering as a measure of discourse continuity even when anaphora resolution is not an issue. It seems that Rough-Shifts are capturing a source of incoherence that has been overlooked in the Centering literature. The processing load in the Rough-Shift cases reported here is not increased by the effort required to resolve anaphoric reference but instead by the effort required to find the relevant topic connections in a discourse bombarded with a rapid succession of multiple entities. That is, Rough-Shifts are the result of absent or extremely short-lived Cbs. We interpret the Rough-Shift transitions in this context as a reflection of the incoherence perceived by the reader when s/he is unable to identify the topic (focus) structure of the discourse. This is a significant insight which opens up new avenues for practical applications of the Centering model.
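The centering computation described here (segment boundaries at paragraph breaks, a Cb computed as the highest-ranked previous Cf realized in the current utterance, transitions read off Table 1, and the Rough-Shift percentage taken over all identified transitions) can be sketched as follows. This is our own reconstruction under stated assumptions, not the perl code used in the study: entities are string ids, each utterance is given as a salience-ranked Cf list with the Cp first, and an utterance with no Cb is counted as a Rough-Shift, matching the view of Rough-Shifts as arising from absent Cbs.

```python
# Sketch of the Rough-Shift percentage computation (our reconstruction).
# essay: list of segments (paragraphs); each segment is a list of
# utterances; each utterance is a salience-ranked list of entity ids
# with the Preferred Center (Cp) first.
def rough_shift_percentage(essay):
    def classify(cb, prev_cb, cp):
        if cb is None:
            return "Rough-Shift"  # assumption: an absent Cb counts here
        if prev_cb is None or cb == prev_cb:
            return "Continue" if cb == cp else "Retain"
        return "Smooth-Shift" if cb == cp else "Rough-Shift"

    transitions = []
    for segment in essay:
        prev_cfs, prev_cb = None, None
        for cfs in segment:
            if prev_cfs is not None:
                # Cb: highest-ranked entity of the previous utterance
                # that is realized in the current one.
                cb = next((e for e in prev_cfs if e in cfs), None)
                transitions.append(classify(cb, prev_cb, cfs[0]))
                prev_cb = cb
            prev_cfs = cfs
    if not transitions:
        return 0.0
    return 100.0 * transitions.count("Rough-Shift") / len(transitions)

coherent = [[["john", "store"], ["john", "piano"], ["john"]]]
choppy = [[["taxes", "state"], ["schools"], ["crime", "city"]]]
print(rough_shift_percentage(coherent))  # 0.0
print(rough_shift_percentage(choppy))    # 100.0
```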
6 The e-rater Centering study

In an earlier preliminary study, we applied the Centering algorithm manually to a sample of … GMAT essays to explore the hypothesis that the Centering model provides a reasonable measure of coherence (or lack of it), reflecting the evaluation performed by human raters with respect to the corresponding requirements described in the instructions for human raters. We observed that essays with higher scores tended to have significantly lower percentages of ROUGH-SHIFTs than essays with lower scores. As expected, the distribution of the other types of transitions was not significant. In general, CONTINUEs, RETAINs, and SMOOTH-SHIFTs do not yield incoherent discourses (in fact, an essay with only CONTINUE transitions might sound rather boring!). In this study we test the hypothesis that a predictor variable derived from Centering can significantly improve the performance of e-rater. Since we are in fact proposing Centering's ROUGH-SHIFTs as a predictor variable, our model, strictly speaking, measures incoherence. The corpus for our study came from a pool of essays written by students taking the GMAT test. We randomly selected a total of 100 essays, covering the full range of the scoring scale, where 1 is lowest and 6 is highest (see appendix). We applied the Centering algorithm to all 100 essays, calculated the percentage of ROUGH-SHIFTs in each essay and then ran multiple regression to evaluate the contribution of the proposed variable to e-rater's performance.

6.1 Centering assumptions and modifications

Utterance. Following Miltsakaki (1999), we assume that each utterance consists of one main clause and all its subordinate and adjunct clauses.

Cf ranking. We assumed the revised Cf ranking rule given above. A modification we made involved the status of the pronominal I.
We observed that in low-scored essays the first-person pronominal I was used extensively, normally presenting personal narratives. However, personal narratives were unsuited to this essay writing task and were assigned lower scores by expert readers. The extensive use of I in the subject position produced an unwanted effect of high coherence. We prescriptively decided to penalize the use of I's in order to better reflect the coherence demands made by the particular writing task. The way to penalize was to omit I's.[4] As a result, coherence was measured with respect to the treatment of the remaining entities in the I-containing utterances. This gave us the desired result of being able to distinguish those I-containing utterances which made coherent transitions with respect to the entities they were talking about from those that did not.

[4] In fact, a similar modification has been proposed by Hurewitz (1998), and Walker (1998) observed that the use of I in sentences such as 'I believe that...', 'I think that...' does not affect the focus structure of the text.

[Table 3: Regression of HUMAN on E-RATER and ROUGH: lack-of-fit statistics, parameter estimates (Intercept; E-RATER, with a positive estimate and Prob>|t| < .0001; ROUGH, with a negative estimate), and effect tests.]

Segments. Segment boundaries are extremely hard to identify in an accurate and principled way. Furthermore, existing algorithms (Morris and Hirst, 1991; Youmans, 1991; Hearst, 1994; Kozima, 1993; Reynar, 1994; Passonneau and Litman, 1997; Passonneau, 1998) rely heavily on the assumption of textual coherence. In our case, textual coherence cannot be assumed.
Given that text organization is also part of the evaluation of the essays, we decided to use the students' paragraph breaks to locate segment boundaries.

6.2 Implementation

For this study, we decided to manually tag coreferring expressions despite the availability of coreference algorithms. We made this decision because poor performance of the coreference algorithm would give us distorted results and we would not be able to test our hypothesis. For the same reason, we manually tagged the Preferred Centers as Cp. We only needed to mark all the other entities as OTHER. This information was adequate for the computation of the Cb and all of the transitions. Discourse segmentation and the implementation of the Centering algorithm for the computation of the transitions were automated. Segment boundaries were marked at paragraph breaks and the transitions were calculated according to the instructions given in Table 1. As output, the system computed the percentage of Rough-Shifts for each essay. The percentage of Rough-Shifts was calculated as the number of Rough-Shifts over the total number of identified transitions in the essay.

7 Study results

In the appendix, we give the percentages of Rough-Shifts (ROUGH) for each of the actual student essays (100) on which we tested the ROUGH variable in the regression discussed below. The HUMAN (HUM) column contains the essay scores given by human raters and the E-RATER (E-R) column contains the corresponding scores assigned by the e-rater. Comparing HUMAN and ROUGH, we observe that essays with scores from the higher end of the scale tend to have lower percentages of Rough-Shifts than the ones from the lower end. To evaluate whether this observation can be utilized to improve the e-rater's performance, we regressed X1 = E-RATER and X2 = ROUGH (the predictors) by Y = HUMAN. The results of the regression are shown in Table 3.
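A regression of this form, HUMAN fitted on an intercept, E-RATER, and ROUGH, can be reproduced in outline with ordinary least squares via the normal equations. The sketch below is purely illustrative, with made-up numbers; it is not the statistics software used in the study.

```python
# Illustrative OLS fit of HUMAN ~ intercept + E-RATER + ROUGH
# via the normal equations (X'X)b = X'y, solved by Gaussian elimination.
def ols_fit(predictors, y):
    X = [[1.0, *row] for row in predictors]  # prepend intercept column
    n, k = len(X), len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
         for a in range(k)]
    v = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    for col in range(k):  # forward elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            v[r] -= f * v[col]
    coef = [0.0] * k  # back substitution
    for r in reversed(range(k)):
        coef[r] = (v[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, k))) / A[r][r]
    return coef  # [intercept, b_erater, b_rough]

# Synthetic data generated from human = 1.0 + 0.8*erater - 0.05*rough,
# so the fit should recover exactly those coefficients (made-up numbers).
data = [(1, 10), (2, 20), (3, 5), (4, 40), (5, 15), (6, 0)]
human = [1.0 + 0.8 * e - 0.05 * r for e, r in data]
print([round(c, 3) for c in ols_fit(data, human)])  # [1.0, 0.8, -0.05]
```

A negative fitted coefficient on the ROUGH predictor is what penalizes Rough-Shift-heavy essays in the combined model.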
The 'Estimate' cell con tains the co ef cien ts assigned for eac h v ariable. The co ef cien t for R OUGH is negativ e, th us p enalizing o ccurrences of Rough-Shifts in the essa ys. The t-test ('t-ratio' in T able ) for R OUGH has a highly signi can t p-v alue (p<0.00) for these 00 essa ys suggesting that the added v ariable R OUGH can con tribute to the accuracy of the mo del. The magnitude of the con tribution indicated b y this regression is appro ximately 0. p oin t, a reasonalb y sizable e ect giv en the scoring scale (-). Additional w ork is needed to precisely quantify the con tribution of R OUGH. That w ould in v olv e incorp orating the R OUGH v ariable in to the building of a new e-r ater mo del and comapring the results of the new mo del to the original e-r ater mo del. As a preliminary test of the predictabilit y of the mo del, w e jac knifed the data. W e p erformed 00 tests with ERA TER as the sole v ariable lea ving out one essa y eac h time and recorded the prediction of the mo del for that essa y . W e rep eated the pro cedure using b oth v ariables. The predicted v alues for ERA TER alone and ERA TER+R OUGH are sho wn in columns PrH/E and PrH/E+R resp ectiv ely in T able . In comparing the predictions, w e observ e that, indeed,  % of the predicted v alues sho wn in the PrH/E+R column are b etter appro ximations of the HUMAN scores, esp ecially in the cases where the ERA TER's score is discrepan t b y  p oin ts from the HUMAN score.  Discussion Our p ositiv e nding, namely that Cen tering Theory's measure of relativ e prop ortion of Rough-Shift transitions is indeed a signi can t con tributor to the accuracy of computergenerated essa y scores, has sev eral practical and theoretical implications. Clearly , it indicates that adding a lo cal coherence feature to e-r ater could signi can tly impro v e e-r ater's scoring accuracy . Note, ho w ev er, that o v erall scores and coherence scores need not b e strongly correlated. 
Indeed, our data contain several examples of essays with high coherence scores but low overall scores and vice versa. We briefly reviewed these cases with several ETS writing assessment experts to gain their insights into the value of pursuing this work further. In an effort to maximize the use of their time with us, we carefully selected three pairs of essays to elicit specific information. One pair included two high-scoring essays, one with a high coherence score and the other with a low coherence score. Another pair included two essays with low coherence scores but differing overall scores (a … and a …). A final pair was carefully chosen to include one essay with an overall score of … that made several main points but did not develop them fully or coherently, and another essay with an overall score of … that made only one main point but did develop it fully and coherently. After briefly describing the Rough-Shift coherence measure and without revealing either the overall scores or the coherence scores of the essay pairs, we asked our experts for their comments on the overall scores and coherence of the essays. In all cases, our experts precisely identified the scores the essays had been given. In the first case, they agreed with the high Centering coherence measure, but one expert disagreed with the low Centering coherence measure. For that essay, one expert noted that "coherence comes and goes" while another found coherence in a "chronological organization of examples" (a notion beyond the domain of Centering Theory). In the second case, our experts' judgments confirmed the Rough-Shift coherence measure. In the third case, our experts specifically identified both the coherence and the development aspects as determinants of the essays' scores. In general, our experts felt that the development of an automated coherence measure would be a useful instructional aid.
The advantage of the Rough-Shift metric over other quantified components of the e-rater is that it can be appropriately translated into instructive feedback for the student. In an interactive tutorial system, segments containing Rough-Shift transitions can be highlighted, and supplementary instructional comments will guide the student into revising the relevant section, paying attention to topic discontinuities.

9 Future work

Our study prescribes a route for several future research projects. Some, such as the need to improve on fully automated techniques for noun phrase/discourse entity identification and coreference resolution, are essential for converting this measure of local coherence to a fully automated procedure. Others, not explicitly discussed here, such as the status of discourse deictic expressions, nominalization resolution, and global coherence studies, are fair game for basic, theoretical research.

Acknowledgements

We would like to thank Jill Burstein, who provided us with the essay set and human and e-rater scores used in this study; Mary Fowles, Peter Cooper, and Seth Weiner, who provided us with the valuable insights of their writing assessment expertise; Henry Brown, who kindly discussed some statistical issues with us; and Ramin Hemat, who provided perl code for automatically computing Centering transitions and the Rough-Shift measure for each essay. We are grateful to Aravind Joshi and Alistair Knott for useful discussions.

References

S. Brennan, M. Walker Friedman, and C. Pollard. 1987. A Centering approach to pronouns. In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics, pages 155-162. Stanford, Calif.
J. Burstein, K. Kukich, S. Wolff, M. Chodorow, L. Braden-Harder, M. D. Harris, and C. Lu. 1998. Automated essay scoring using a hybrid feature identification technique.
In Annual Meeting of the Association for Computational Linguistics, Montreal, Canada, August.

B. Di Eugenio. Centering in Italian. In Centering Theory in Discourse. Clarendon Press, Oxford.

B. Grosz and C. Sidner. Attention, intentions and the structure of discourse. Computational Linguistics.

B. Grosz, A. Joshi, and S. Weinstein. Providing a unified account of definite noun phrases in discourse. In Annual Meeting of the Association for Computational Linguistics.

B. Grosz. The representation and use of focus in language understanding. Technical Report, SRI International, Menlo Park, Calif.

M. Hearst. Multi-paragraph segmentation of expository text. In Proc. of the ACL.

L. Horn. Presupposition, theme and variations. In Chicago Linguistics Society.

S. Hudson-D'Zmura. The Structure of Discourse and Anaphor Resolution: The Discourse Center and the Roles of Nouns and Pronouns. Ph.D. thesis, University of Rochester.

F. Hurewitz. A quantitative look at discourse coherence. In M. Walker, A. Joshi, and E. Prince, editors, Centering Theory in Discourse. Clarendon Press, Oxford.

A. Joshi and S. Kuhn. Centered logic: The role of entity centered sentence representation in natural language inferencing. In International Joint Conference on Artificial Intelligence.

A. Joshi and S. Weinstein. Control of inference: Role of some aspects of discourse structure: Centering. In International Joint Conference on Artificial Intelligence.

M. Kameyama. Zero Anaphora: The Case of Japanese. Ph.D. thesis, Stanford University.

M. Kameyama. Intrasentential Centering: A case study. In M. Walker, A. Joshi, and E. Prince, editors, Centering Theory in Discourse. Clarendon Press, Oxford.

H. Kozima. Text segmentation based on similarity between words. In Proc.
of the ACL (Student Session).

E. Miltsakaki. Locating topics in text processing. In Proceedings of Computational Linguistics in the Netherlands (CLIN).

J. Morris and G. Hirst. Lexical cohesion computed by thesaural relations as an indicator of the structure of the text. Computational Linguistics.

E. B. Page and N. Peterson. The computer moves into essay grading: Updating the ancient test. Phi Delta Kappan, March.

R. Passonneau and D. Litman. Discourse segmentation by human and automated means. Computational Linguistics.

R. Passonneau. Interaction of discourse structure with explicitness of discourse anaphoric noun phrases. In M. Walker, A. Joshi, and E. Prince, editors, Centering Theory in Discourse. Clarendon Press, Oxford.

T. Reinhart. Pragmatics and linguistics: An analysis of sentence topics. Philosophica.

J. Reynar. An automatic method of finding topic boundaries. In Proc. of the ACL (Student Session).

C. Sidner. Toward a computational theory of definite anaphora comprehension in English. Technical Report, MIT, Cambridge, Mass.

U. Turan. Null vs. Overt Subjects in Turkish Discourse: A Centering Analysis. Ph.D. thesis, University of Pennsylvania.

M. Walker and E. Prince. A bilateral approach to givenness: A hearer-status algorithm and a Centering algorithm. In T. Fretheim and J. Gundel, editors, Reference and Referent Accessibility. Amsterdam: John Benjamins.

M. Walker, A. Joshi, and E. Prince, editors. Centering Theory in Discourse. Clarendon Press, Oxford.

M. Walker. Centering: Anaphora resolution and discourse structure. In M. Walker, A. Joshi, and E. Prince, editors, Centering Theory in Discourse. Clarendon Press, Oxford.

G. Youmans. A new tool for discourse analysis: The vocabulary-management profile. Language.
HUM E-R ROUGH PrH/E PrH/E+R

Table: Table with the human scores (HUM), the e-rater scores (E-R), the Rough-Shift measure (ROUGH), the (jackknifed) predicted values using e-rater as the only variable (PrH/E), and the (jackknifed) predicted values using the e-rater and the added variable Rough-Shift (PrH/E+R). The ROUGH measure is the percentage of Rough-Shifts over the total number of identified transitions. The question mark appears where no transitions were identified.
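The ROUGH measure defined in the caption can be computed with a small helper. This is an illustrative sketch only, not the perl code credited in the acknowledgements:

```python
# Percentage of Rough-Shifts among all identified Centering transitions,
# or None when no transitions were identified (the '?' entries).

def rough_shift_measure(transitions):
    if not transitions:
        return None
    rough = sum(1 for t in transitions if t == "ROUGH-SHIFT")
    return 100.0 * rough / len(transitions)

print(rough_shift_measure(["CONTINUE", "RETAIN", "ROUGH-SHIFT", "CONTINUE"]))  # 25.0
```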
2000
A Hierarchical Account of Referential Accessibility

Nancy IDE
Department of Computer Science
Vassar College
Poughkeepsie, New York 12604-0520 USA
[email protected]

Dan CRISTEA
Department of Computer Science
University “Al. I. Cuza”
Iasi, Romania
[email protected]

Abstract

In this paper, we outline a theory of referential accessibility called Veins Theory (VT). We show how VT addresses the problem of "left satellites", currently a problem for stack-based models, and show that VT can be used to significantly reduce the search space for antecedents. We also show that VT provides a better model for determining domains of referential accessibility, and discuss how VT can be used to address various issues of structural ambiguity.

Introduction

In this paper, we outline a theory of referential accessibility called Veins Theory (VT). We compare VT to stack-based models based on Grosz and Sidner's (1986) focus spaces, and show how VT addresses the problem of "left satellites", i.e., subordinate discourse segments that appear prior to their nuclei (dominating segments) in the linear text. Left satellites pose a problem for stack-based models, which remove subordinate segments from the stack before pushing a nuclear or dominating segment, thus rendering them inaccessible. The percentage of such cases is typically small, which may account for the fact that their treatment has been largely overlooked in the literature, but the phenomenon nonetheless persists in most texts. We also show how VT can be used to address various issues of structural ambiguity.

1 Veins Theory

Veins Theory (VT) extends and formalizes the relation between discourse structure and reference proposed by Fox (1987). VT identifies “veins” over discourse structure trees that are built according to the requirements put forth in Rhetorical Structure Theory (RST) (Mann and Thompson, 1987). RST structures are represented as binary trees, with no loss of information.
Veins are computed based on the RST-specific distinction between nuclei and satellites; therefore, the RST relations labeling nodes in the tree are ignored. Terminal nodes in the tree represent discourse units and non-terminal nodes represent discourse relations. The fundamental intuition underlying VT is that the distinction between nuclei and satellites constrains the range of referents to which anaphors can be resolved; in other words, the nucleus-satellite distinction induces a domain of referential accessibility (DRA) for each referential expression. More precisely, for each anaphor a in a discourse unit u, VT hypothesizes that a can be resolved by examining referential expressions that were used in a subset of the discourse units that precede u; this subset is called the DRA of u. For any elementary unit u in a text, the corresponding DRA is computed automatically from the text's RST tree in two steps:

1. Heads for each node are computed bottom-up over the rhetorical representation tree. Heads of elementary discourse units are the units themselves. Heads of internal nodes, i.e., discourse spans, are computed by taking the union of the heads of the immediate child nodes that are nuclei. For example, for the text in Figure 1, with the rhetorical structure shown in Figure 2, the head of span [5,7] is unit 5. Note that the head of span [6,7] is the list <6,7> because both immediate children are nuclei.

2. Using the results of step 1, vein expressions are computed top-down for each node in the tree, using the following functions:
− mark(x), which returns each symbol in a string of symbols x marked with parentheses.
− seq(x,y), which concatenates the labels in x with the labels in y, left-to-right.
− simpl(x), which eliminates all marked symbols from x, if they exist.

The vein of the root is its head.
Veins of child nodes are computed recursively, as follows:
• for each nuclear node whose parent has vein v, if the node has a left non-nuclear sibling with head h, then the vein expression is seq(mark(h), v); otherwise, v.
• for each non-nuclear node with head h whose parent node has vein v, if the node is the left child of its parent, then seq(h, v); otherwise, seq(h, simpl(v)).

1 Figure 1 highlights two co-referential equivalence classes: referential expressions surrounded by boxes refer to “Mr. Casey”; those surrounded by ellipses refer to “Genetic Therapy Inc.”.
2 The rhetorical structure is represented using the conventions proposed by Mann and Thompson (1988).

One of the conjectures of VT is that the vein expression of a unit (terminal node), which includes a chain of discourse units that contain that unit itself, provides an “abstract” or summary of the discourse fragment that contains that unit. Because it is an internally coherent piece of discourse, all referential expressions (REs) in the unit preferentially find their referees within that sub-text. Referees that do not appear in the DRA are possible, but are more difficult to process, both computationally and cognitively (see Section 2.2). This conjecture expresses the intuition that potential referees of the REs of a unit depend on the nuclearity of previous units: both a satellite and a nucleus can access a previous nuclear node, a nucleus can only access another left nuclear node or its own left satellite, and the interposition of a nucleus after a satellite blocks the accessibility of the satellite for any nodes lower in the hierarchy.

1. Michael D. Casey, a top Johnson & Johnson manager, moved to Genetic Therapy Inc., a small biotechnology concern here,
2. to become its president and chief operating officer.
3. Mr. Casey, 46 years old, was president of J&J’s McNeil Pharmaceutical subsidiary,
4. which was merged with another J&J unit, Ortho Pharmaceutical Corp., this year in a cost-cutting move.
5. Mr.
Casey succeeds M. James Barrett, 50, as president of Genetic Therapy.
6. Mr. Barrett remains chief executive officer
7. and becomes chairman.
8. Mr. Casey said
9. he made the move to the smaller company
10. because he saw health care moving toward technologies like the company’s gene therapy products.
11. I believe that the field is emerging and is prepared to break loose,
12. he said.

Figure 1: MUC corpus text fragment

The DRA of a unit u is given by the units in the vein that precede u. For example, for the text and RST tree in Figures 1 and 2, the vein expression of unit 3, which contains units 1 and 3, suggests that anaphors from unit 3 should be resolved only to referential expressions in units 1 and 3. Because unit 2 is a satellite to unit 1, it is considered to be “blocked” to referential links from unit 3. In contrast, the DRA of unit 9, consisting of units 1, 8, and 9, reflects the intuition that anaphors from unit 9 can be resolved only to referential expressions from unit 1, which is the most important unit in span [1,7], and to unit 8, a satellite that immediately precedes unit 9. Figure 2 shows the heads and veins of all internal nodes in the rhetorical representation.

In general, co-referential relations (such as the identity relation) induce equivalence classes over the set of referential expressions in a text. When hierarchical adjacency is considered, an anaphor may be resolved to a referent that is not the closest in a linear interpretation of a text. However, because referential expressions are organized in equivalence classes, it is sufficient that an anaphor is resolved to some member of the set. This is consistent with the distinction between "direct" and "indirect" references discussed in (Cristea et al., 1998).
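The two-step computation (heads bottom-up, veins top-down, DRAs read off the veins) can be sketched as follows. This is a minimal illustration over binary trees; the Node class and the set-of-(unit, marked)-pairs representation are invented here, standing in for the paper's symbol strings, with veins sorted into text order on output:

```python
# Sketch of VT's head/vein/DRA computation; illustrative, not the authors' code.

class Node:
    def __init__(self, nuclear, left=None, right=None, unit=None):
        self.nuclear = nuclear        # True for a nucleus, False for a satellite
        self.left, self.right = left, right
        self.unit = unit              # unit number for terminal nodes
    @property
    def children(self):
        return [c for c in (self.left, self.right) if c is not None]

def head(n):                          # step 1: heads, bottom-up
    if n.unit is not None:
        return {n.unit}
    return set().union(*(head(c) for c in n.children if c.nuclear))

def plain(h):
    return {(u, False) for u in h}

def mark(h):                          # mark(x): parenthesized symbols
    return {(u, True) for u in h}

def simpl(v):                         # simpl(x): drop marked symbols
    return {(u, m) for (u, m) in v if not m}

def compute_veins(n, vein=None, acc=None):   # step 2: veins, top-down
    acc = {} if acc is None else acc
    if vein is None:
        vein = plain(head(n))         # the vein of the root is its head
    if n.unit is not None:
        acc[n.unit] = vein
    for i, c in enumerate(n.children):
        sib = n.children[1 - i] if len(n.children) == 2 else None
        if c.nuclear:                 # nucleus with a left non-nuclear sibling
            cv = vein | mark(head(sib)) if (i == 1 and sib and not sib.nuclear) else vein
        else:                         # satellite: left child keeps v, right gets simpl(v)
            cv = plain(head(c)) | (vein if i == 0 else simpl(vein))
        compute_veins(c, cv, acc)
    return acc

def dra(u, veins):
    """Units of the vein of u up to and including u, in text order."""
    return sorted(x for (x, _) in veins[u] if x <= u)

# Three units: satellite 1, its nucleus 2, then satellite 3 attached above.
n1, n2, n3 = Node(False, unit=1), Node(True, unit=2), Node(False, unit=3)
root = Node(True, left=Node(True, left=n1, right=n2), right=n3)
vs = compute_veins(root)
print(dra(2, vs), dra(3, vs))  # [1, 2] [2, 3]
```

The example reproduces the a b c intuition from Section 2.3: the nucleus (unit 2) may access its left satellite (unit 1), but the subsequent right satellite (unit 3) may not.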
Figure 2: RST analysis of the text in Figure 1 (showing the head H, vein V, and DRA of each node)

2 VT and Stack-based Models

Veins Theory claims that references from a given unit are possible only in its DRA, i.e., that discourse structure constrains the areas of the text over which references can be resolved. In previous work, we compared the potential of hierarchical and linear models of discourse--i.e., approaches that enumerate potential antecedents in an undifferentiated window of text linearly preceding the anaphor under scrutiny--to correctly establish co-referential links in texts, and hence, their potential to correctly resolve anaphors (Cristea et al., 2000). Our results showed that by exploiting the hierarchical discourse structure of texts, one can increase the potential of natural language systems to correctly determine co-referential links, which is a requirement for correctly resolving anaphors. In general, the potential to correctly determine co-referential links was greater for VT than for linear models when one looks back 4 elementary discourse units. When looking back more than four units, the linear model was equally effective.

Here, we compare VT to stack-based models of discourse structure based on Grosz and Sidner's (1986) (G&S) focus spaces (e.g., Hahn and Strübe, 1997; Azzam et al., 1998). In these approaches, discourse segments are pushed on the stack as they are encountered in a linear traversal of the text. Before a dominating segment is pushed, subordinate segments that precede it are popped from the stack. Antecedents for REs appearing in the segment on the top of the stack are sought in discourse segments in the stack below it.
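The push/pop discipline just described can be sketched as a toy model. Segment names, entity sets, and the `dominates_previous` flag below are invented for illustration; real focus-space models carry far more structure:

```python
# Toy sketch of the stack-based search: a subordinate segment preceding its
# dominating segment is popped before the dominating segment is pushed,
# so its entities become unreachable from later segments.

def process(segments):
    """segments: (name, entities, dominates_previous) triples in text order.
    Returns, for each segment, the stack contents searchable below it."""
    stack, log = [], []
    for name, entities, dominates_previous in segments:
        if dominates_previous and stack:
            stack.pop()                     # drop the preceding subordinate segment
        stack.append((name, entities))
        log.append((name, list(stack[:-1])))  # segments searchable for antecedents
    return log

log = process([
    ("A", {"a spokesman"}, False),   # left satellite
    ("B", {"he"}, True),             # its nucleus: A is popped before B is pushed
])
print(log[-1])  # ('B', []) -- nothing below B: the satellite's entity is lost
```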
Therefore, in cases where a subordinate segment a precedes a dominating segment b, a reference to an entity in a by an RE in b is not resolvable. Special provision could be made in order to handle such cases—e.g., subsequently pushing a on top of b—but this would violate the overall strategy of resolving REs appearing in segments currently on the top of the stack. The special status given to left satellites in VT addresses this problem.

For example, one RST analysis of (1) proposed by Moser and Moore (1996) is given in Figure 3. Moser and Moore note that the relation of an RST nucleus to its satellite is analogous to the dominates relation proposed by G&S (see also Marcu, 2000). As a subordinate segment preceding the segment that dominates it, the satellite is popped from the stack before the dominant segment (the nucleus) is pushed in the stack-based model, and therefore it is not included among the discourse segments that are searched to resolve co-references.3 Similarly, the text in (2), taken from the MUC annotated corpus (Marcu et al., 1999), was assigned the RST structure in Figure 4, which presents the same problem for the stack-based approach: this in C2 refers to the Clinton program in A2, but because A2 is a subordinate segment, it is no longer on the stack when C2 is processed.

(1) A1. George Bush supports big business.
B1. He's sure to veto House Bill 1711.

Figure 3: RST analysis of (1)

3 Note that Moser and Moore (1996) also propose an informational RST structure for the same text, in which a «volitional-cause» relation holds between the nucleus a and the satellite b, thus providing for a to be on the stack when b is processed.

(2) A2. Some of the executives also signed letters on behalf of the Clinton program.
B2. Nearly all of them praised the president for his efforts to pare the deficit.
C2. This is not necessarily the package I would design,
D2. said Martin Marietta's Mr. Augustine.
E2. But we have to attack the deficit.
Figure 4: RST analysis of (2)

2.1 Validation

To validate our claim, we examined 23 newspaper texts with widely varying lengths (mean length = 408 words, standard deviation 376). The texts were annotated manually for co-reference relations of identity (Hirschman and Chinchor, 1997). The co-reference relations define equivalence relations on the set of all marked references in a text. The texts were also annotated manually with discourse structures built in the style of Mann and Thompson (1988). Each analysis yielded an average of 52 elementary discourse units. Details of the annotation process are given in (Marcu et al., 1999).

Six percent of all co-references in the corpus are to left satellites. If only co-references pointing outside the unit in which they appear (inter-unit references) are considered, the rate increases to 7.76%. Among these cases, two possibilities exist: either the reference is unresolvable using the stack-based method because the unit in which the referent appears has been popped from the stack, or the stack-based algorithm finds a correct referent in an earlier unit that is still on the stack. Twenty-two percent of the referents that VT finds in left satellites (2.38% of all co-referring expressions in the corpus) fall into the first category. For example, in text fragment (3), taken from the MUC corpus, the co-referential equivalence class for the pronoun he in C3 includes Salomon Brothers analyst Jeff Canin in B3 and he in A3. The RST analysis of this fragment in Figure 5 shows that both A3 and B3 are left satellites. A stack-based approach would not find either antecedent for he in C3, since both A3 and B3 are popped from the stack before C3 is processed.

(3) A3. Although the results were a little lighter than the 49 cents a share he hoped for,
B3. Salomon Brothers analyst Jeff Canin said
C3.
he was pleased with Sun's gross margins for the quarter.

Figure 5: RST analysis of (3)

In cases where stack-based approaches find a co-referent (although not the most recent antecedent) elsewhere in the stack, it makes sense to compare the effort required by the two models to establish correct co-referential links. That is, we assume that from a computational perspective (and, presumably, a psycholinguistic one as well), the closer an antecedent is to the referential expression to be resolved, the better. We have shown elsewhere (Cristea et al., 2000) that VT, compared to linear models, requires significantly less effort for DRAs of any size. We use a similar strategy here to compute the effort required by VT and stack-based models. DRAs for both models are treated as ordered lists. For example, text fragment (4) reflects the set of units on the stack at a given point in processing one of the MUC texts; units D4 and E4, in brackets, are left satellites and therefore not available using the stack-based model, but visible using VT. To determine the correct antecedent of Mr. Clinton in F4 using the stack-based model, it is necessary to search back through 3 units (C4, B4, A4) to find the referent President Clinton. In contrast, using VT, we search back only 1 unit, to D4.

(4) A4. A group of top corporate executives urged Congress to pass President Clinton's deficit-reduction plan,
B4. declaring that it is superior to the only apparent alternative: more gridlock.
C4. Some of the executives who attended yesterday's session weren't a surprise.
[ D4. Tenneco Inc. Chairman Michael Walsh, for instance, is a staunch Democrat who provided an early endorsement for Mr. Clinton during the presidential campaign.
E4. Xerox Corp.'s Chairman Paul Allaire was one of the few top corporate chief executive officers who contributed money to the Clinton campaign. ]
F4. And others, such as Atlantic Richfield Co. Chairman Lodwrick M. Cook and Zenith Electronics Corp.
Chairman Jerry Pearlman, have also previously voiced their approval of Mr. Clinton's economic strategy.

We compute the effort e(M,a,DRAk) of a model M to determine correct co-referential links with respect to a referential expression a in unit u as the number of units between u and the first unit in DRAk(u) that contains a co-referential expression of a. The effort e(M,C,k) of a model M to determine correct co-referential links for all referential expressions in a corpus of texts C using DRAs of size k is computed as the sum of the efforts e(M,a,DRAk) of all referential expressions a where VT finds the co-reference of a in a left satellite. Since co-referents found in units that are not left satellites will be identical for both VT and stack-based models, the difference in effort between the two models depends only on co-referents found in left satellites. Figure 6 shows the VT and stack-based efforts computed over referential expressions resolved by VT in left satellites, for k = 1 to 12. Obviously, for a given k and a given referent a, it may be that no co-reference exists in the units of the corresponding DRAk; in these cases, we consider the effort to be equal to k. As a result, for small k the effort required to establish co-referential links is similar for both models, because both can establish only a limited number of links. However, as k increases, the effort computed over the entire corpus diverges, with VT performing consistently better than the stack-based model.

Figure 6: Effort required by VT and stack-based models

Note that in some cases, the stack-based model performs better than VT, in particular for small k. This occurs when VT searches back through n adjacent left satellites, where n > 1, to find a co-reference, but a co-referent is found using the stack-based method by searching back m non-left-satellite units, where m < n.
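Under these definitions, the per-expression effort and its corpus sum can be sketched as follows. DRA lists are assumed ordered nearest-first from the unit holding the anaphor; the unit labels loosely mirror example (4) but are illustrative only:

```python
# Sketch of the effort computation e(M, a, DRA_k) and e(M, C, k).

def effort_one(dra_nearest_first, coref_units, k):
    """How many units back the first co-referent of a lies in the DRA,
    or k when no co-referent occurs among the first k DRA units."""
    for dist, unit in enumerate(dra_nearest_first[:k], start=1):
        if unit in coref_units:
            return dist
    return k

def effort_corpus(cases, k):
    """Sum of per-expression efforts; each case is a
    (dra_nearest_first, coref_units) pair for one referential expression."""
    return sum(effort_one(dra, coref, k) for dra, coref in cases)

coref = {"A4", "D4"}                                   # units with a co-referent
print(effort_one(["C4", "B4", "A4"], coref, 4))        # stack model: 3 units back
print(effort_one(["D4", "C4", "B4", "A4"], coref, 4))  # VT: 1 unit back
```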
This would be the case if, for instance, VT first found a co-referent for Mr. Clinton in text (4) in D4 (2 units away), but the stack-based model found a co-referent in C4 (1 unit away, since the left satellites are not on the stack). In our corpus, 15% of the co-references found in left satellites by VT required less effort using the stack-based method, whereas VT out-performed the stack-based method 23% of the time. In the majority of cases (62%), the two models required the same level of effort. However, all of the cases in which the stack-based model performed better are for small k (k < 4), and the average difference in distance (in units) is 1.25. In contrast, VT out-performs the stack-based model for cases ranging over all values of k in our experiment (1 to 12), and the average difference in distance is 3.8 units. At k = 4, VT can determine all the co-referents in our corpus, whereas the stack-based model requires DRAs of up to 12 units to resolve them all. This accounts for the marked divergence in effort shown in Figure 6 as k increases. So, despite the minor difference in the percentage of cases where VT out-performs the stack-based model, VT has the potential to significantly reduce the search space for co-referential links.

2.2 Exceptions

We have also examined the exceptions, i.e., co-referential links that VT and stack-based models cannot determine correctly. Because of the equivalence of the stack contents for left-balanced discourse trees, there is no case in which the stack-based model finds a referent where VT does not. There are, however, a number of referring expressions for which neither VT nor the stack-based model finds a co-referent. In the corpus of MUC texts we consider, 12.3% of inter-unit references fall into this category, or 9.3% of the references in the corpus if we include intra-unit references.
Table 1 provides a summary of the types of referring expressions for which co-referents are not found in our corpus—i.e., no antecedent exists, or the antecedent appears outside the DRA.4 We show the percentage of REs in our corpus for which VT (and the stack-based model as well, since all units in the DRA computed according to VT are in the DRA computed using the stack-based model) fails to find an antecedent, and the percentage of REs for which VT finds a co-referent (in a left satellite) but the stack-based model does not.

4 Our calculations are made based on the RST analysis of the MUC data, in which we detected a small number of structural errors. Therefore, the values given here are not absolute but rather provide an indication of the relative distribution of RE types.

We consider four types of REs:
(1) Pragmatic references, which refer to entities that can be assumed part of general knowledge, such as the Senate, the key in the phrase lock them up and throw away the key, or our in the phrase our streets.
(2) Proper nouns, such as Mr. Gerstner or Senator Biden.
(3) Common nouns, such as the steelmaker, the proceeds, or the top job.
(4) Pronouns.

Following (Gundel et al., 1993), we consider that the evoking power of each of these types of REs decreases as we move down the list. That is, pragmatic references are easily understood without an antecedent; proper nouns and noun phrases less so, and are typically resolved by inference over the context.
On the other hand, pronouns have very poor evoking power, and therefore a message emitter employs them only when s/he is certain that the structure of the discourse allows for easy recuperation of the antecedent in the message receiver's memory.5 Except for the cases where a pronoun can be understood without an antecedent (e.g., our in our streets), it is virtually impossible to use a pronoun to refer to an antecedent that is outside the DRA.

Type of RE      VT      Stack-based
pragmatic       56.3%   0.0%
proper nouns    22.7%   26.1%
common nouns    16.0%   39.1%
pronouns        5.0%    34.8%

Table 1: Exceptions for VT and stack-based models

The alignment of the evoking power of referential expressions with the percentage of exceptions for both models shows that the predictions made by VT relative to DRAs are fundamentally correct--that is, their prevalence corresponds directly to their respective evoking powers. On the other hand, the almost equal distribution of exceptions over RE types for the stack-based model shows that it is less reliable for determining DRAs.

5 Ideally, a psycho-linguistic study of reading times to verify the claim that referees outside the DRA are more difficult to process would be in order.

Note that in all VT exceptions for pronouns, the RST attribution relation is involved. Text fragment (5) and the corresponding RST tree (Figure 7) show the typical case:

(5) A5. A spokesman for the company said,
B5. Mr. Bartlett’s promotion reflects the current emphasis at Mary Kay on international expansion.
C5. Mr. Bartlett will be involved in developing the international expansion strategy,
D5. he said.

The antecedent for he in D5 is a spokesman for the company in A5, which, due to the nuclear-satellite relations, is inaccessible on the vein. Our results suggest that annotation of attributive relations needs to be refined, possibly by treating X said and the attributed quotation as a single unit. If this were done, the vein expression would allow appropriate access.
Figure 7: RST analysis of (5)

2.3 Summary

In sum, VT provides a more natural account of referential accessibility than the stack-based model. In cases where the discourse structure is not left-polarized, at least one satellite precedes its nucleus in the discourse and is therefore its left sibling in the binary discourse tree. The vein definition formalizes the intuition that in a sequence of units a b c, where a and c are satellites of b, b can refer to entities in a (its left satellite), but the subsequent right satellite, c, cannot refer to a due to the interposition of nuclear unit b--or, if such a reference exists, it is harder to process. In stack-based approaches to referentiality, such configurations pose problems: because b dominates a, in order to resolve potential references from b to a, b must appear below a on the stack even though it is processed after a. Even if the processing difficulties are overcome, this situation leads to the postulation of cataphoric references when a satellite precedes its nucleus, which is counterintuitive.

3 VT and Structural Ambiguity

The fact that VT considers only the nuclear-satellite distinction and ignores rhetorical labeling has practical ramifications for anaphora resolution systems that rely on discourse structure to determine the DRA for a given RE. (Marcu et al., 1999) show that over a corpus of texts drawn from MUC newspaper texts, the Wall Street Journal corpus, and the Brown Corpus, reliable agreement among annotators is consistently obtained for discourse segmentation and assignment of nuclear-satellite status, while agreement on rhetorical labeling was less reliable (statistically significant only for the MUC texts). This means that even when there exist differences in rhetorical labeling, vein expressions can be computed and used to determine DRAs.
VT also has ramifications for evaluating the viability of different structural representations for a given text, at least for the purposes of reference resolution. Like syntactic parsing, discourse parsing typically yields several interpretations, and one of the a priori tasks for further analysis of the parsed texts is to choose one from among potentially several alternative structures. Marcu (1996) showed that using only rhetorical relations, as many as five different structures can be identified for some texts. Considering intention-based relations can yield even more alternatives. For anaphora resolution, the choice of one structure over another may have significant impact. For example, an RST tree for (6) using rhetorical relations is given in Figure 8; Figure 9 shows another RST tree for the same text, using intention-based relations. If we compute the vein expressions for both representations, we see that the vein for segment C6 in the intentional representation is <A6 B6 C6>, whereas in the rhetorical representation, the vein is <(B6), C6>. That is, under the constraints imposed by VT, John is not available as a referent for he in C6 in the rhetorical version, although John is clearly the appropriate antecedent. Interestingly, the intention-based analysis is skewed to the right and thus is a "better" representation according to the criteria outlined in (Marcu, 1996); it also eliminates the left satellite that was shown to pose problems for stack-based approaches. It is therefore likely that the intention-based analysis is "better" for the purposes of anaphora resolution.

(6) A6. Tell John to bring the car home by 5.
B6. That way I can get to the store before it closes.
C6. Then he can finish the bookshelves tonight.
Figure 8: RST tree for text (6), using rhetorical relations

Figure 9: RST tree for text (6), using intention-based relations

Conclusion

Veins Theory is based on established notions of discourse structure: hierarchical organization, as in the stack-based model and RST's tree structures, and dominance or nuclear/satellite relations between discourse segments. As such, VT captures and formalizes intuitions about discourse structure that run through the current literature. VT also explicitly recognizes the special status of the left satellite for discourse structure, which has not been adequately addressed in previous work. In this paper we have shown how VT addresses the left satellite problem, and how VT can be used to address various issues of structural ambiguity. VT predicts that references not resolved in the DRA of the unit in which they appear are more difficult to process, both computationally and cognitively; by looking at cases where VT fails we determine that this claim is justified. By comparing the types of referring expressions for which VT and the stack-based model fail, we also show that VT provides a better model for determining DRAs.

Acknowledgements

We thank Daniel Marcu for providing us with the RST annotated MUC corpus, and Valentin Tablan for developing part of the software that enabled us to process the data.

References

Azzam S., Humphreys K. and Gaizauskas R. (1998). Evaluating a Focus-Based Approach to Anaphora Resolution. Proceedings of COLING-ACL’98, 74-78.

Cristea D., Ide N., and Romary L. (1998). Veins Theory: A Model of Global Discourse Cohesion and Coherence. Proceedings of COLING-ACL’98, 281-285.

Cristea D., Ide N., Marcu D., and Tablan V. (2000). An Empirical Investigation of the Relation Between Discourse Structure and Co-Reference. Proceedings of COLING 2000, 208-214.

Fox B. (1987). Discourse Structure and Anaphora. Written and Conversational English.
No. 48 in Cambridge Studies in Linguistics, Cambridge University Press.
Grosz B. and Sidner C. (1986). Attention, Intention and the Structure of Discourse. Computational Linguistics, 12, 175-204.
Gundel J., Hedberg N. and Zacharski R. (1993). Cognitive Status and the Form of Referring Expressions. Language, 69:274-307.
Hahn U. and Strube M. (1997). Centering in-the-large: Computing referential discourse segments. Proceedings of ACL-EACL'97, 104-111.
Hirschman L. and Chinchor N. (1997). MUC-7 Co-reference Task Definition.
Mann W.C. and Thompson S.A. (1988). Rhetorical structure theory: A theory of text organization. Text, 8:3, 243-281.
Marcu D., Amorrortu E. and Romera M. (1999). Experiments in Constructing a Corpus of Discourse Trees. Proceedings of the ACL'99 Workshop on Standards and Tools for Discourse Tagging.
Marcu D. (2000). Extending a Formal and Computational Model of Rhetorical Structure Theory with Intentional Structures à la Grosz and Sidner. Proceedings of COLING 2000, 523-529.
Marcu D. (1999). A Formal and Computational Synthesis of Grosz and Sidner's and Mann and Thompson's theories. Workshop on Levels of Representation in Discourse, 101-108.
Marcu D. (1996). Building Up Rhetorical Structure Trees. Proceedings of the Thirteenth National Conference on Artificial Intelligence, vol. 2, 1069-1074.
Moser M. and Moore J. (1996). Towards a Synthesis of Two Accounts of Discourse Structure. Computational Linguistics, 18(4): 537-544.
Sidner C. (1981). Focusing and the Interpretation of Pronouns. Computational Linguistics, 7:217-231.
2000
53
Lexical transfer using a vector-space model
Eiichiro SUMITA
ATR Spoken Language Translation Research Laboratories
2-2 Hikaridai, Seika, Soraku, Kyoto 619-0288, Japan
[email protected]

Abstract
Building a bilingual dictionary for transfer in a machine translation system is conventionally done by hand and is very time-consuming. In order to overcome this bottleneck, we propose a new mechanism for lexical transfer, which is simple and suitable for learning from bilingual corpora. It exploits a vector-space model developed in information retrieval research. We present a preliminary result from our computational experiment.

Introduction
Many machine translation systems have been developed and commercialized. When these systems are faced with unknown domains, however, their performance degrades. Although there are several reasons behind this poor performance, in this paper we concentrate on one of the major problems, i.e., building a bilingual dictionary for transfer. A bilingual dictionary consists of rules that map a part of the representation of a source sentence to a target representation by taking grammatical differences (such as differences in word order between the source and target languages) into consideration. These rules are usually based on case frames and carry syntactic and/or semantic constraints on the mapping from a source word to a target word. For many machine translation systems, experienced experts on the individual systems compile the bilingual dictionary, because this is a complicated and difficult task. In other words, this task is knowledge-intensive and labor-intensive, and therefore time-consuming. Typically, the developer of a machine translation system has to spend several years building a general-purpose bilingual dictionary.
Unfortunately, such a general-purpose dictionary is not a panacea, in that (1) when faced with a new domain, unknown source words may emerge and/or some domain-specific usages of known words may appear, and (2) the accuracy of target word selection may be insufficient due to the handling of many target words simultaneously. Recently, to overcome these bottlenecks in knowledge building and/or tuning, the automation of lexicography has been studied by many researchers: (1) approaches using a decision tree, in which the ID3 learning algorithm is applied to obtain transfer rules from case-frame representations of simple sentences, with a thesaurus for generalization (Akiba et al., 1996; Tanaka, 1995); (2) approaches using structural matching, in which several search methods have been proposed to obtain transfer rules by maximal structural matching between trees obtained by parsing bilingual sentences (Kitamura and Matsumoto, 1996; Meyers et al., 1998; Kaji et al., 1992).

1 Our proposal
1.1 Our problem and approach
In this paper, we concentrate on lexical transfer, i.e., target word selection. In other words, the mapping of structures between source and target expressions is not dealt with here. We assume that this structural transfer can be solved on top of lexical transfer. We propose an approach that differs from the studies mentioned in the introduction in that:
I) It uses vector-space representations rather than structural representations such as case frames.
II) The weight of each element for constraining the ambiguity of target words is determined automatically, following the term frequency and inverse document frequency used in information retrieval research.
III) A word alignment method that does not rely on parsing is utilized.
IV) Bilingual corpora are clustered in terms of target equivalence.
1.2 Background
The background for the decisions made in our approach is as follows:
A) We would like to reduce the human interaction needed to prepare the data necessary for building lexical transfer rules.
B) We do not expect that mature parsing systems for multiple languages and/or spoken languages will be available in the near future.
C) We would like the determination of the importance of each feature in target selection to be automated.
D) We would like the problems caused by errors in the corpora and by data sparseness to be reduced.

2 Vector-space model
This section explains our trial application of a vector-space model to lexical transfer, starting from the basic idea.

2.1 Basic idea
We can select an appropriate target word for a given source word by observing the environment, including the context, world knowledge, and target words in the neighborhood. The most influential elements in the environment are of course the other words in the source sentence surrounding the concerned source word. Suppose that we have translation examples including the concerned source word and we know in advance which target word corresponds to the source word. By measuring the similarity between (1) an unknown sentence that includes the concerned source word and (2) known sentences that include the concerned source word, we can select the target word that is included in the most similar sentence. This is the same idea as example-based machine translation (Sato and Nagao, 1990; Furuse et al., 1994).

Group 1: 辛口 (not sweet)
source sentence 1: This beer is drier and full-bodied.
target sentence 1: □□□□□□□□辛口□□□□□□□□
source sentence 2: Would you like dry or sweet sherry?
target sentence 2: 辛口□□□□□□□□□□□□□□□□□□□□□□□□
source sentence 3: A dry red wine would go well with it.
target sentence 3: □□□□辛口□□□□□□□□□□□□
Group 2: 乾燥 (not wet)
source sentence 4: Your skin feels so dry.
target sentence 4: □□□□□乾燥□□□□□
source sentence 5: You might want to use some cream to protect your skin against the dry air.
target sentence 5: 乾燥□□□□□□□□□□□□□□□□□□□□□□□□□□□□
Table 1: Portions of English "dry" translated into Japanese in an aligned corpus

Listed in Table 1 are samples of English-Japanese sentence pairs from our corpus including the source word "dry." The upper three samples, of group 1, are translated with the target word "辛口 (not sweet)," and the lower two samples, of group 2, are translated with the target word "乾燥 (not wet)." The remaining portions of the target sentences are hidden here because they do not relate to the discussion in this paper. The underlined words are some of the cues used to select the target words. They are distributed in the source sentence with several different grammatical relations to the concerned word "dry," such as subject, parallel adjective, and modified noun.

2.2 Sentence vector
We propose representing a sentence as a sentence vector, i.e., a vector that lists all of the words in the sentence. The sentence vector of the first sentence of Table 1 is as follows: <this, beer, is, dry, and, full-body>

Figure 1: System configuration

Figure 1 outlines our proposal. Suppose that we have the sentence vector of an input sentence I and the sentence vector of an example sentence E from a bilingual corpus. We measure the similarity by computing the cosine of the angle between I and E. We output the target word of the example sentence whose cosine is maximal.

2.3 Modification of sentence vector
The naïve implementation of a sentence vector that uses the occurrences of words themselves suffers from data sparseness and unawareness of relevance.

2.3.1 Semantic category incorporation
To reduce the adverse influence of data sparseness, we count occurrences not only of the words themselves but also of the semantic categories of the words given by a thesaurus.
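The basic selection step just described (bag-of-words sentence vectors compared by cosine, before the refinements of subsections 2.3.2 and 3.2) can be sketched as follows. The tokenizer and the toy example data are our own illustrative assumptions, not the paper's implementation:

```python
import math
from collections import Counter

def sentence_vector(sentence):
    # Bag-of-words sentence vector: lowercase words -> occurrence counts.
    words = sentence.lower().replace("?", " ").replace(".", " ").replace(",", " ").split()
    return Counter(words)

def cosine(u, v):
    # Cosine of the angle between two sparse count vectors.
    dot = sum(u[w] * v[w] for w in u if w in v)
    norm = math.sqrt(sum(c * c for c in u.values())) * math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

def select_target(input_sentence, examples):
    # examples: (source_sentence, target_word) pairs from the bilingual corpus.
    # Output the target word of the example whose cosine with the input is maximal.
    i_vec = sentence_vector(input_sentence)
    _, target = max(examples, key=lambda ex: cosine(i_vec, sentence_vector(ex[0])))
    return target

# Toy examples modeled on Table 1 (group 1: 辛口 "not sweet"; group 2: 乾燥 "not wet").
examples = [
    ("This beer is drier and full-bodied.", "辛口"),
    ("Would you like dry or sweet sherry?", "辛口"),
    ("Your skin feels so dry.", "乾燥"),
]
print(select_target("Is this wine dry or sweet?", examples))  # -> 辛口
```

Here the cues "dry," "or," and "sweet" make the sherry example the nearest neighbor, so the "not sweet" reading is chosen.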
For example, the "辛口 (not sweet)" sentences of Table 1 have the different cue words "beer," "sherry," and "wine," and these cues are merged into the single semantic category alcohol in the sentence vectors.

2.3.2 Grouping sentences and weighting dimensions
The previous subsection does not consider the relevance of each element of the vectors to target selection; therefore, the selection may fail due to non-relevant elements. We exploit the term frequency and inverse document frequency used in information retrieval research. Here, we regard a group of sentences that share the same target word as a "document." Vectors are made not sentence-wise but group-wise. The relevance of each dimension is the term frequency multiplied by the inverse document frequency. The term frequency is the frequency in the document (group); a repetitive occurrence may indicate the importance of the word. The inverse document frequency corresponds to the discriminative power of the word for target selection. It is usually calculated as the logarithm of N divided by df, where N is the number of documents (groups) and df is the number of documents (groups) that include the word.

Cluster 1: a piece of paper money, C(紙幣)
source sentence 1: May I have change for a ten dollar bill?
target sentence 1: □□□□□紙幣□□□□□□□□□□
source sentence 2: Could you change a fifty dollar bill?
target sentence 2: □□□□札□□□□□□□□□□
Cluster 2: an account, C(勘定)
source sentence 3: I've already paid the bill.
target sentence 3: □□勘定□□□□□□□
source sentence 4: Isn't my bill too high?
target sentence 4: □□料金□□□□□□□□□□
source sentence 5: I'm checking out. May I have the bill, please?
target sentence 5: □□□□□□□□□□会計□□□□□
Table 2: Samples of groups clustered by target equivalence

3 Pre-processing of the corpus
Before generating vectors, the given bilingual corpus is pre-processed in two ways: (1) words are aligned in terms of translation; and (2) sentences are clustered in terms of target equivalence to reduce problems caused by data sparseness.

3.1 Word alignment
We need to have source words and target words aligned in the parallel corpora. We use a word alignment program that does not rely on parsing (Sumita, 2000). This is not the focus of this paper, and therefore we will only describe it briefly here. First, all possible alignments are hypothesized as a matrix filled with occurrence similarities between source words and target words. Second, using the occurrence similarities and other constraints, the most plausible alignment is selected from the matrix.

3.2 Clustering by target words
We adopt a clustering method to avoid the sparseness that comes from variations in target words. The translations of a word can vary more than the meanings of the word. For example, the English word "bill" has two main meanings: (1) a piece of paper money, and (2) an account. In Japanese, there is more than one word for each meaning: for (1), "札" and "紙幣" can correspond, and for (2), "勘定," "会計," and "料金" can correspond. The most frequent target word can represent the cluster, e.g., "紙幣" for (1) a piece of paper money and "勘定" for (2) an account. We assume that selecting a cluster is equal to selecting the target word. If we can merge such equivalent translation variations of target words into clusters, we can improve the accuracy of lexical transfer for two reasons: (1) doing so makes each cluster larger by neglecting accidental differences among target words; and (2) doing so collects scattered pieces of evidence and strengthens their effect. Furthermore, word alignment as an automated process is incomplete.
We therefore need to filter out erroneous target words that come from alignment errors. Erroneous target words are considered to be low in frequency and are expected to be semantically dissimilar from the correct target words based on correct alignments. Clustering the example corpora can help filter out erroneous target words. By calculating the semantic similarity between the semantic codes of target words, we perform clustering according to the simple algorithm in subsection 3.2.2.

3.2.1 Semantic similarity
Suppose each target word has semantic codes for all of its possible meanings. In our thesaurus, for example, the target word "札" has three decimal codes, 974 (label/tag), 829 (counter) and 975 (money), and the target word "紙幣" has a single code, 975 (money). We represent these as code vectors and define the similarity between two target words by computing the cosine of the angle between their code vectors.

3.2.2 Clustering algorithm
We adopt a simple procedure to cluster a set of n target words X = {X_1, X_2, ..., X_n}. X is sorted in descending order of the frequency of each X_i in a sub-corpus including the concerned source word. We repeat (1) and (2) until the set X is empty:
(1) We move the leftmost X_l from X to the new cluster C(X_l).
(2) For all m (m > l), we move X_m from X to C(X_l) if the cosine of X_l and X_m is larger than the threshold T.
As a result, we obtain a set of clusters {C(X_l)}, one for each meaning, as exemplified in Table 2. The threshold of semantic similarity T is determined empirically; T in the experiment was 1/2.

4 Experiment
To demonstrate the feasibility of our proposal, we conducted a pilot experiment, as explained in this section.
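The greedy clustering procedure of subsection 3.2.2 can be sketched as follows. The codes for "札" and "紙幣" are those quoted from the paper's thesaurus example; the codes for "勘定"/"会計" and all frequencies are our own illustrative assumptions:

```python
import math

def code_cosine(codes_a, codes_b):
    # Cosine between binary code vectors (each thesaurus code counted once).
    shared = len(set(codes_a) & set(codes_b))
    return shared / math.sqrt(len(set(codes_a)) * len(set(codes_b)))

def cluster_targets(words, codes, freq, threshold=0.5):
    # Greedy clustering: the most frequent remaining word seeds a cluster (step 1),
    # then absorbs every remaining word whose code cosine exceeds the threshold (step 2).
    remaining = sorted(words, key=lambda w: -freq[w])
    clusters = []
    while remaining:
        head = remaining.pop(0)
        cluster = [head]
        for w in remaining[:]:
            if code_cosine(codes[head], codes[w]) > threshold:
                cluster.append(w)
                remaining.remove(w)
        clusters.append(cluster)
    return clusters

# 札/紙幣 codes follow the paper's example; 勘定/会計 codes and frequencies are invented.
codes = {"紙幣": [975], "札": [974, 829, 975], "勘定": [461], "会計": [461]}
freq = {"紙幣": 10, "勘定": 8, "札": 4, "会計": 3}
print(cluster_targets(["紙幣", "札", "勘定", "会計"], codes, freq))
```

With T = 1/2, "札" joins the "紙幣" cluster (cosine 1/√3 ≈ 0.58) and "会計" joins the "勘定" cluster, reproducing the two-way split of Table 2.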
Table 3: Corpus statistics
Number of sentence pairs (English-Japanese): 19,402
Number of source words (English): 156,128
Number of target words (Japanese): 178,247
Number of source content words (English): 58,633
Number of target content words (Japanese): 64,682
Number of different source content words (English): 4,643
Number of different target content words (Japanese): 6,686

4.1 Experimental conditions
For our sentence vectors and code vectors, we used hand-made thesauri of Japanese and English covering our corpus (for a travel arrangement task), whose hierarchy is based on that of the Japanese commercial thesaurus Kadokawa Ruigo Jiten (Ohno and Hamanishi, 1984). We used our English-Japanese phrase book (a collection of pairs of typical sentences and their translations) for foreign tourists. The statistics of the corpus are summarized in Table 3. We word-aligned the corpus before generating the sentence vectors. We focused on the transfer of content words such as nouns, verbs, and adjectives. We picked out six polysemous words for a preliminary evaluation: "bill," "dry," and "call" in English, and "熱," "悪い," and "飲む" in Japanese. We confined ourselves to a selection between the two major clusters of each source word, obtained using the method in subsection 3.2.

Table 4: Accuracy of the baseline and the VSM systems
word              #1&2    #1   baseline   #correct   vsm
bill [noun]         47    30      64%         40     85%
call [verb]        179    93      52%        118     66%
dry [adjective]      6     3      50%          4     67%
熱 [noun]           19    13      68%         14     73%
飲む [verb]         60    42      70%         49     82%
悪い [adjective]    26    15      57%         16     62%

4.2 Selection accuracy
We compared the accuracy of our proposal using the vector-space model (the vsm system) with that of a decision-by-majority model (the baseline system). The results are shown in Table 4. Here, the accuracy of the baseline system is #1 (the number of target sentences of the most major cluster) divided by #1&2 (the number of target sentences of clusters 1 & 2).
The accuracy of the vsm system is #correct (the number of vsm answers that match the target sentence) divided by #1&2.

Table 5: Coverage of the top two clusters
word              #all   #1&2   coverage
bill [noun]         63     47      74%
call [verb]        226    179      79%
dry [adjective]      8      6      75%
熱 [noun]           22     19      86%
飲む [verb]         77     60      78%
悪い [adjective]    38     26      68%

Judging was done mechanically by assuming that the aligned data was 100% correct.1 Our vsm system achieved an accuracy of about 60% to about 80% and outperformed the baseline system by about 5% to about 20%.

1 This does not necessarily hold; therefore, performance degrades to a certain degree.

4.3 Coverage of major clusters
One reason why we clustered the example database was to filter out noise, i.e., wrongly aligned words. We skimmed the clusters and saw that many instances of noise had been filtered out. At the same time, however, a portion of correctly aligned data was unfortunately discarded. We think that such discarding is not fatal, because the coverage of clusters 1 & 2 was relatively high, around 70% or 80%, as shown in Table 5. Here, the coverage is #1&2 (the number of data not filtered) divided by #all (the number of data before discarding).

5 Discussion
5.1 Accuracy
The experiment was done for a restricted problem, i.e., selecting the appropriate cluster (target word) from the two major clusters (target words), and the result was encouraging for the automation of lexicography for transfer. We plan to improve the accuracy obtained so far by exploring elementary techniques: (1) adding new features, including extra-linguistic information such as the role of the speaker of the sentence (Yamada et al., 2000) and the topic that the sentences refer to, may be effective; (2) considering the physical distance from the concerned input word may improve the accuracy, and a kind of window function might also be useful; (3) improving the word alignment may contribute to the overall accuracy.
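The three ratios defined above (baseline accuracy, vsm accuracy, and coverage) are straightforward to reproduce from the table counts; taking the "bill" row as a worked example:

```python
# Ratios behind Tables 4 and 5 for "bill" (counts taken from the tables above).
n_top, n_12, n_all, n_correct = 30, 47, 63, 40

baseline = n_top / n_12      # decision-by-majority accuracy: #1 / #1&2
vsm = n_correct / n_12       # vector-space model accuracy:   #correct / #1&2
coverage = n_12 / n_all      # top-two-cluster coverage:      #1&2 / #all

print(f"baseline {baseline:.1%}, vsm {vsm:.1%}, coverage {coverage:.1%}")
# -> baseline 63.8%, vsm 85.1%, coverage 74.6%
```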
5.2 Data sparseness
In our proposal, deficiencies in the naïve implementation of the vsm are compensated for in several ways, by using a thesaurus, grouping, and clustering, as explained in subsections 2.3 and 3.2.

5.3 Future work
We have shown only the translation of content words. Next, we will explore the translation of function words, the word order, and full sentences. Our proposal depends on a hand-crafted thesaurus. If we manage to do without such craftsmanship, we will achieve broader applicability. Therefore, automatic thesaurus construction is an important research goal for the future.

Conclusion
In order to overcome a bottleneck in building a bilingual dictionary, we proposed a simple mechanism for lexical transfer using a vector space. A preliminary computational experiment showed that our basic proposal is promising. Further development, however, is required: to use a window function or a better alignment program, and to compare our approach with other statistical methods such as decision trees, maximum entropy, and so on. Furthermore, an important piece of future work is to create a full translation mechanism based on this lexical transfer.

Acknowledgements
Our thanks go to Kadokawa-Shoten for providing us with the Ruigo-Shin-Jiten.

References
Akiba, O., Ishii, M., Almuallim, H., and Kaneda, S. (1996) A Revision Learner to Acquire English Verb Selection Rules, Journal of NLP, 3/3, pp. 53-68, (in Japanese).
Furuse, O., Sumita, E. and Iida, H. (1994) Transfer-Driven Machine Translation Utilizing Empirical Knowledge, Transactions of IPSJ, 35/3, pp. 414-425, (in Japanese).
Kaji, H., Kida, Y. and Morimoto, Y. (1992) Learning translation templates from bilingual text, Proc. of Coling-92, pp. 672-678.
Kitamura, M. and Matsumoto, Y. (1996) Automatic Acquisition of Translation Rules from Parallel Corpora, Transactions of IPSJ, 37/6, pp. 1030-1040, (in Japanese).
Meyers, A., Yangarber, R., Grishman, R., Macleod, C., and Sandoval, A.
(1998) Deriving Transfer rules from dominance-preserving alignments, Coling-ACL98, pp. 843-847.
Ohno, S. and Hamanishi, M. (1984) Ruigo-Shin-Jiten, Kadokawa, p. 932, (in Japanese).
Sato, S. and Nagao, M. (1990) Toward memory-based translation, Coling-90, pp. 247-252.
Sumita, E. (2000) Word alignment using matrix, PRICAI-00, (to appear).
Tanaka H. (1995) Statistical Learning of "Case Frame Tree" for Translating English Verbs, Journal of NLP, 2/3, pp. 49-72, (in Japanese).
Yamada, S., Sumita, E. and Kashioka, H. (2000) Translation using Information on Dialogue Participants, ANLP-00, pp. 37-43.
Using Confidence Bands for Parallel Texts Alignment
António RIBEIRO, Departamento de Informática, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, Quinta da Torre, P-2825-114 Monte da Caparica, Portugal, [email protected]
Gabriel LOPES, Departamento de Informática, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, Quinta da Torre, P-2825-114 Monte da Caparica, Portugal, [email protected]
João MEXIA, Departamento de Matemática, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, Quinta da Torre, P-2825-114 Monte da Caparica, Portugal

Abstract
This paper describes a language independent method for the alignment of parallel texts that makes use of homograph tokens for each pair of languages. In order to filter out tokens that may cause misalignments, we use confidence bands of linear regression lines instead of heuristics, which are not theoretically supported. This method was originally inspired by work done by Pascale Fung and Kathleen McKeown, and by Melamed, and provides the statistical support those authors could not claim.

Introduction
Human-compiled bilingual dictionaries do not cover every term translation, especially when it comes to technical domains. Moreover, we can no longer afford to waste human time and effort manually building these ever-changing and incomplete databases, or to design language-specific applications to solve this problem. The need for an automatic, language independent task for equivalents extraction becomes clear in multilingual regions like Hong Kong, Macao, Quebec and the European Union, where texts must be translated daily into eleven languages, or even in the U.S.A., where Spanish- and English-speaking communities are intermingled. Parallel texts (texts that are mutual translations) are valuable sources of information for bilingual lexicography. However, they are not of much use unless a computational system can find which piece of text in one language corresponds to which piece of text in the other language.
In order to achieve this, they must be aligned first, i.e. the various pieces of text must be put into correspondence. This makes the translations extraction task easier and more reliable. Alignment is usually done by finding correspondence points – sequences of characters with the same form in both texts (homographs, e.g. numbers, proper names, punctuation marks), similar forms (cognates, like Region and Região in English and Portuguese, respectively) or even previously known translations. Pascale Fung and Kathleen McKeown (1997) present an alignment algorithm that uses term translations as correspondence points between English and Chinese. Melamed (1999) aligns texts using correspondence points taken either from orthographic cognates (Michel Simard et al., 1992) or from a seed translation lexicon. However, although the heuristics both approaches use to filter noisy points may be intuitively quite acceptable, they are not theoretically supported by Statistics. The former approach considers a candidate correspondence point reliable as long as, among some other constraints, “[...] it is not too far away from the diagonal [...]” (Pascale Fung and Kathleen McKeown, 1997, p.72) of a rectangle whose sides sizes are proportional to the lengths of the texts in each language (henceforth, ‘the golden translation diagonal’). The latter approach uses other filtering parameters: maximum point ambiguity level, point dispersion and angle deviation (Melamed, 1999, pp. 115–116). António Ribeiro et al. (2000a) propose a method to filter candidate correspondence points generated from homograph words which occur only once in parallel texts (hapaxes) using linear regressions and statistically supported noise filtering methodologies. The method avoids heuristic filters and they claim high precision alignments. In this paper, we will extend this work by defining a linear regression line with all points generated from homographs with equal frequencies in parallel texts. 
We will filter out those points which lie outside statistically defined confidence bands (Thomas Wonnacott and Ronald Wonnacott, 1990). Our method will repeatedly use a standard linear regression line adjustment technique to filter unreliable points until there is no misalignment. Points resulting from this filtration are chosen as correspondence points. The following section will discuss related work. The method is described in section 2 and we will evaluate and compare the results in section 3. Finally, we present conclusions and future work.

1 Background
There have been two mainstreams in parallel text alignment. One assumes that translated texts have proportional sizes; the other tries to use lexical information in parallel texts to generate candidate correspondence points. Both use some notion of correspondence points. Early work by Peter Brown et al. (1991) and William Gale and Kenneth Church (1991) aligned sentences which had a proportional number of words and characters, respectively. Pairs of sentence delimiters (full stops) were used as candidate correspondence points, and they ended up being selected while aligning. However, these algorithms tended to break down when sentence boundaries were not clearly marked. Full stops do not always mark sentence boundaries; they may not even exist due to OCR noise, and languages may not share the same punctuation policies. Using lexical information, Kenneth Church (1993) showed that cheap alignment of text segments was still possible by exploiting orthographic cognates (Michel Simard et al., 1992) instead of sentence delimiters. They became the new candidate correspondence points. During the alignment, some were discarded because they lay outside an empirically estimated bounded search space, required for time and space reasons. Martin Kay and Martin Röscheisen (1993) also needed clearly delimited sentences. Words with similar distributions became the candidate correspondence points.
Two sentences were aligned if the number of correspondence points associating them was greater than an empirically defined threshold: "[...] more than some minimum number of times [...]" (Martin Kay and Martin Röscheisen, 1993, p.128). In Ido Dagan et al. (1993), noisy points were filtered out by deleting frequent words. Pascale Fung and Kathleen McKeown (1994) dropped the requirement for sentence boundaries in a case study on English-Chinese. Instead, they used vectors that stored distances between consecutive occurrences of a word (DK-vec's). Candidate correspondence points were identified from words with similar distance vectors, and noisy points were filtered using some heuristics. Later, in Pascale Fung and Kathleen McKeown (1997), the algorithm used extracted terms to compile a list of reliable pairs of translations. Those pairs whose distribution similarity was above a threshold became candidate correspondence points (called potential anchor points). These points were further constrained not to be "too far away" from the 'translation diagonal'. Michel Simard and Pierre Plamondon (1998) aligned sentences using isolated cognates as candidate correspondence points, i.e. cognates that were not mistaken for others within a text window. Some were filtered out if they either lay outside an empirically defined search space, named a corridor, or were "not in line" with their neighbours. Melamed (1999) also filtered candidate correspondence points obtained from orthographic cognates: a maximum point ambiguity level filters points outside a search space, a maximum point dispersion filters points too distant from a line formed by candidate correspondence points, and a maximum angle deviation filters points that make this line slope too much. Whether the filtering of candidate correspondence points is done prior to alignment or during it, we all want to find reliable correspondence points.
They provide the basic means for extracting reliable information from parallel texts. However, as far as we learned from the above papers, current methods have repeatedly used statistically unsupported heuristics to filter out noisy points. For instance, the 'golden translation diagonal' is mentioned in all of them, but none attempts to filter noisy points using statistically defined confidence bands.

2 Correspondence Points Filters
2.1 Overview
The basic insight is that not all candidate correspondence points are reliable. Whatever heuristics are taken (similar word distributions, search corridors, point dispersion, angle deviation, ...), we want to filter the most reliable points. We assume that reliable points have similar characteristics. For instance, they tend to gather somewhere near the 'golden translation diagonal'. Homographs with equal frequencies may be good alignment points.

2.2 Source Parallel Texts
We worked with a mixed parallel corpus consisting of texts selected at random from the Official Journal of the European Communities1 (ELRA, 1997) and from The Court of Justice of the European Communities2, in eleven languages3.

Table 1: Words per sub-corpus (average per text inside brackets; markups discarded)4
Language   Written Questions   Debates        Judgements   Total
da         259k (52k)          2,0M (395k)    16k (3k)     2250k
de         234k (47k)          1,8M (368k)    15k (3k)     2088k
el         272k (54k)          1,9M (387k)    16k (3k)     2222k
en         263k (53k)          2,1M (417k)    16k (3k)     2364k
es         292k (58k)          2,2M (439k)    18k (4k)     2507k
fi         --                  --             13k (3k)     13k
fr         310k (62k)          2,2M (447k)    19k (4k)     2564k
it         279k (56k)          1,9M (375k)    17k (3k)     2171k
nl         275k (55k)          2,1M (428k)    16k (3k)     2431k
pt         284k (57k)          2,1M (416k)    17k (3k)     2381k
sv         --                  --             15k (3k)     15k
Total      2468k (55k)         18,4M (408k)   177k (3k)    21005k
For each language, we included:
• five texts with Written Questions asked by members of the European Parliament to the European Commission and their corresponding answers (average: about 60k words or 100 pages per text);
• five texts with records of Debates in the European Parliament (average: about 400k words or more than 600 pages per text); these are written transcripts of oral discussions;
• five texts with judgements of The Court of Justice of the European Communities (average: about 3k words or 5 pages per text).

Footnotes:
1 Danish (da), Dutch (nl), English (en), French (fr), German (de), Greek (el), Italian (it), Portuguese (pt) and Spanish (es).
2 Webpage address: curia.eu.int
3 The same languages as those in footnote 1 plus Finnish (fi) and Swedish (sv).
4 No Written Questions and Debates texts for Finnish and Swedish are available in ELRA (1997), since the texts provided are from the 1992-4 period and it was not until 1995 that the respective countries became part of the European Union.

In order to reduce the number of possible pairs of parallel texts from 110 sets (11 languages × 10) to a more manageable size of 10 sets, we decided to take Portuguese as the kernel language of all pairs.

2.3 Generating Candidate Correspondence Points
We generate candidate correspondence points from homographs with equal frequencies in two parallel texts. Homographs, as a naive and particular form of cognate words, are likely translations (e.g. Hong Kong in various European languages).
Here is a table with the percentages of occurrences of these words in the texts used:

Table 2: Average number of homographs with equal frequencies per pair of parallel texts (average percentage of homographs inside brackets)
Pair     Written Questions   Debates       Judgements     Average
pt-da    2,8k (4,9%)         2,5k (0,6%)   0,3k (8,1%)    2,5k (1,1%)
pt-de    2,7k (5,1%)         4,2k (1,0%)   0,4k (7,9%)    4,0k (1,5%)
pt-el    2,3k (4,0%)         1,9k (0,5%)   0,3k (6,9%)    1,9k (0,8%)
pt-en    2,7k (4,8%)         2,8k (0,7%)   0,3k (6,2%)    2,7k (1,1%)
pt-es    4,1k (7,1%)         7,8k (1,9%)   0,7k (15,2%)   7,4k (2,5%)
pt-fi    --                  --            0,2k (5,2%)    0,2k (5,2%)
pt-fr    2,9k (5,0%)         5,1k (1,2%)   0,4k (9,4%)    4,8k (1,6%)
pt-it    3,1k (5,5%)         5,4k (1,3%)   0,4k (9,6%)    5,2k (1,8%)
pt-nl    2,6k (4,5%)         4,9k (1,2%)   0,3k (8,3%)    4,7k (1,6%)
pt-sv    --                  --            0,3k (6,9%)    0,3k (6,9%)
Average  2,9k (5,1%)         4,4k (1,1%)   0,4k (8,4%)    4,2k (1,5%)

For average-size texts (e.g. the Written Questions), these words account for about 5% of the total (about 3k words per text). This number varies according to language similarity. For instance, on average, it is higher for Portuguese-Spanish than for Portuguese-English. These words end up being mainly numbers and names. Here are a few examples from a parallel Portuguese-English text: 2002 (numbers, dates), ASEAN (acronyms), Patten (proper names), China (countries), Manila (cities), apartheid (foreign words), Ltd (abbreviations), habitats (Latin words), ferry (common names), global (common vocabulary). In order to avoid pairing homographs that are not equivalent (e.g. 'a', a definite article in Portuguese and an indefinite article in English), we restricted ourselves to homographs with the same frequency in both parallel texts. In this way, we are selecting words with similar distributions. Actually, equal-frequency words helped Jean-François Champollion to decipher the Rosetta Stone, for there was a name of a king (Ptolemy V) which occurred the same number of times in the 'parallel texts' of the stone.
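The generation of candidate correspondence points from equal-frequency homographs can be sketched as follows. The whitespace tokenization and the toy sentence pair are our own illustrative assumptions, not the paper's implementation:

```python
from collections import Counter

def candidate_points(src_tokens, tgt_tokens):
    # Homographs with equal frequencies in both texts become candidate
    # correspondence points: one (source position, target position) pair
    # per occurrence, matched in order of occurrence.
    src_freq, tgt_freq = Counter(src_tokens), Counter(tgt_tokens)
    homographs = {w for w in src_freq if tgt_freq.get(w) == src_freq[w]}
    src_pos = {w: [] for w in homographs}
    tgt_pos = {w: [] for w in homographs}
    for i, w in enumerate(src_tokens):
        if w in homographs:
            src_pos[w].append(i)
    for j, w in enumerate(tgt_tokens):
        if w in homographs:
            tgt_pos[w].append(j)
    points = []
    for w in homographs:
        points.extend(zip(src_pos[w], tgt_pos[w]))
    return sorted(points)

# Toy pt-en pair: "Patten" and "2002" are homographs with equal frequencies;
# "a" is not, since it never occurs in the English side.
pt = "a comissária Patten respondeu em 2002 a a pergunta".split()
en = "commissioner Patten replied to the question in 2002".split()
print(candidate_points(pt, en))  # -> [(2, 1), (5, 7)]
```

Note how the frequency constraint silently discards Portuguese "a", which has no English counterpart here, mirroring the filtering of non-equivalent homographs described above.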
Each pair of texts provides a set of candidate correspondence points from which we draw a line based on linear regression. Points are defined using the co-ordinates of the word positions in each parallel text. For example, if the first occurrence of the homograph word Patten occurs at word position 125545 in the Portuguese text and at 135787 in the English parallel text, then the point co-ordinates are (125545, 135787). The generated points may adjust themselves well to a linear regression line or may be dispersed around it. So, firstly, we use a simple filter based on the histogram of the distances between the expected and real positions. After that, we apply a finer-grained filter based on statistically defined confidence bands for linear regression lines. We will now elaborate on these filters.
2.4 Eliminating Extreme Points
The points obtained from the positions of homographs with equal frequencies are still prone to be noisy. Here is an example:
Figure 1: Noisy versus ‘well-behaved’ (‘in line’) candidate correspondence points (pt word positions against en word positions). The linear regression line equation (y = 0,9165x + 141,65) is shown on the top right corner.
The figure above shows noisy points because their respective homographs appear in positions quite apart. We should feel reluctant to accept distant pairings and that is what the first filter does. It filters out those points which are clearly too far apart from their expected positions to be considered as reliable correspondence points.
We find expected positions by building a linear regression line with all points, and then determining the distances between the real and the expected word positions:

pt Position   Word     en Position (Real)   en Position (Expected)   Distance
3877          I        24998                3695                     21303
9009          etc      22897                8399                     14499
11791         I        25060                10948                    14112
15248         As       3398                 14117                    10719
16965         As       3591                 15690                    12099
22819         volume   32337                21056                    11281

Table 3: A sample of the distances between expected and real positions of noisy points in Figure 1.

Expected positions are computed from the linear regression line equation y = ax + b, where a is the line slope and b is the Y-axis intercept (the value of y when x is 0), substituting x for the Portuguese word position. For Table 3, the expected word position for the word I at pt word position 3877 is 0.9165 × 3877 + 141.65 = 3695 (see the regression line equation in Figure 1) and, thus, the distance between its expected and real positions is | 3695 – 24998 | = 21303. If we draw a histogram ranging from the smallest to the largest distance, we get:
Figure 2: Histogram of the distances between expected and real word positions (number of points per distance class; the points beyond the gap at 22152 are filtered).
In order to build this histogram, we use the Sturges rule (see ‘Histograms’ in Samuel Kotz et al. 1982). The number of classes (bars or bins) is given by 1 + log₂ n, where n is the total number of points. The size of the classes is given by (maximum distance – minimum distance) / number of classes. For example, for Figure 1, we have 3338 points and the distances between expected and real positions range from 0 to 35997. Thus, the number of classes is 1 + log₂ 3338 ≅ 12.7 → 13 and the size of the classes is (35997 – 0) / 13 ≅ 2769. In this way, the first class ranges from 0 to 2769, the second class from 2769 to 5538 and so forth.
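The whole first filter — least-squares line, distances to expected positions, Sturges-rule histogram, cut at the first empty class — can be sketched as follows (an illustrative Python version; the function names are ours, not from the paper):

```python
import math

def fit_line(points):
    """Ordinary least-squares fit y = a*x + b through the points."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    a = sxy / sxx
    b = my - a * mx
    return a, b

def histogram_filter(points):
    """Drop points whose distance to the regression line falls
    beyond the first empty Sturges-rule class (the histogram gap)."""
    a, b = fit_line(points)
    dist = [abs((a * x + b) - y) for x, y in points]
    dmin, dmax = min(dist), max(dist)
    n_classes = math.ceil(1 + math.log2(len(points)))  # Sturges rule
    width = (dmax - dmin) / n_classes or 1.0
    counts = [0] * n_classes
    for d in dist:
        counts[min(int((d - dmin) / width), n_classes - 1)] += 1
    cutoff = float("inf")
    for k, c in enumerate(counts):
        if c == 0:  # first gap in the histogram of distances
            cutoff = dmin + k * width
            break
    return [p for p, d in zip(points, dist) if d < cutoff]
```

This mirrors Figure 2: everything beyond the first gap in the histogram is treated as an extreme point and removed.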
With this histogram, we are able to identify those words which are too far apart from their expected positions. In Figure 2, the gap in the histogram makes clear that there is a discontinuity in the distances between expected and real positions. So, we are confident that all points above 22152 are extreme points. We filter them out of the candidate correspondence points set and proceed to the next filter.
2.5 Confidence Bands of Linear Regression Lines
Confidence bands of linear regression lines (Thomas Wonnacott and Ronald Wonnacott, 1990, p. 384) help us to identify reliable points, i.e. points which belong to a regression line with a great confidence level (99.9%). The band is typically wider in the extremes and narrower in the middle of the regression line. The figure below shows an example of filtering using confidence bands:
Figure 3: Detail of the filter based on confidence bands (expected y, real y, and the confidence band around the regression line, over pt and en word positions). Point A lies outside the confidence band. It will be filtered out.
We start from the regression line defined by the points filtered with the Histogram technique, described in the previous section, and then we calculate the confidence band. Points which lie outside this band are filtered out since they are credited as too unreliable for alignment (e.g. Point A in Figure 3). We repeat this step until no pieces of text belong to different translations, i.e. until there is no misalignment. The confidence band is the error admitted at an x co-ordinate of a linear regression line. A point (x,y) is considered outside a linear regression line with a confidence level of 99.9% if its y co-ordinate does not lie within the confidence interval [ ax + b – error(x); ax + b + error(x) ], where ax + b is the linear regression line equation and error(x) is the error admitted at the x co-ordinate.
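A sketch of this band test in Python (illustrative only; it recomputes the regression line from the points that survived the histogram filter, and uses the large-sample critical value 3.27 adopted in the paper):

```python
import math

T_999 = 3.27  # critical value the paper uses for its 99.9% band

def confidence_band_filter(points, t=T_999):
    """Keep the points whose y co-ordinate lies inside the
    confidence band of the regression line through all points."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    a = sum((x - mx) * (y - my) for x, y in points) / sxx
    b = my - a * mx
    # standard deviation of the residuals (n - 2 degrees of freedom)
    s = math.sqrt(sum((y - (a * x + b)) ** 2 for x, y in points) / (n - 2))
    kept = []
    for x, y in points:
        error = t * s * math.sqrt(1.0 / n + (x - mx) ** 2 / sxx)
        if abs(y - (a * x + b)) <= error:
            kept.append((x, y))
    return kept
```

Re-running the filter on its own output until nothing more is removed corresponds to the iteration "until there is no misalignment" described above.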
The upper and lower limits of the confidence interval are given by the following equation (see Thomas Wonnacott & Ronald Wonnacott, 1990, p. 385):

y = ax + b ± t₀.₀₀₅ · s · sqrt( 1/n + (x – X̄)² / Σᵢ₌₁..ₙ (xᵢ – X̄)² )

where:
• t₀.₀₀₅ is the t-statistics value for a 99.9% confidence interval. We will use the z-statistics instead since t₀.₀₀₅ = z₀.₀₀₅ = 3.27 for large samples of points (above 120);
• n is the number of points;
• s is the standard deviation from the expected value ŷ at co-ordinate x (see Thomas Wonnacott & Ronald Wonnacott, 1990, p. 379):

s = sqrt( Σᵢ₌₁..ₙ (yᵢ – ŷᵢ)² / (n – 2) ), where ŷ = ax + b;

• X̄ is the average value of the various xᵢ: X̄ = (1/n) Σᵢ₌₁..ₙ xᵢ.

3 Evaluation
We ran our alignment algorithm on the parallel texts of 10 language pairs as described in section 2.2. The table below summarises the results:

Pair     Written Questions   Debates    Judgements   Average
pt-da    128 (5%)            56 (2%)    114 (35%)    63 (2%)
pt-de    124 (5%)            99 (2%)    53 (15%)     102 (3%)
pt-el    118 (5%)            115 (6%)   60 (20%)     115 (6%)
pt-en    88 (3%)             102 (4%)   50 (19%)     101 (4%)
pt-es    59 (1%)             55 (1%)    143 (21%)    56 (1%)
pt-fi    ---                 ---        60 (26%)     60 (26%)
pt-fr    148 (5%)            113 (2%)   212 (49%)    117 (2%)
pt-it    117 (4%)            104 (2%)   25 (6%)      105 (2%)
pt-nl    120 (5%)            73 (1%)    53 (15%)     77 (2%)
pt-sv    ---                 ---        74 (23%)     74 (23%)
Average  113 (4%)            90 (2%)    84 (23%)     92 (2%)

Table 4: Average number of correspondence points in the first non-misalignment, per sub-corpus (average ratio of filtered and initial candidate correspondence points inside brackets).

On average, we end up with about 2% of the initial correspondence points, which means that we are able to break a text in about 90 segments (ranging from 70 words to 12 pages per segment for the Debates). An average of just three filtrations is needed: the Histogram filter plus two filtrations with the Confidence Bands. The figure below shows an example of a misaligning correspondence point.
Figure 4: Bad correspondence points (× – misaligning points, which show up as crossed segments over the pt and en word positions; • – correspondence points).
Had we restricted ourselves to using homographs which occur only once (hapaxes), we would get about one third of the final points (António Ribeiro et al. 2000a). Hapaxes turn out to be good candidate correspondence points because they work like cognates that are not mistaken for others within the full text scope (Michel Simard and Pierre Plamondon, 1998). When they are in similar positions, they turn out to be reliable correspondence points. To compare our results, we aligned the BAF Corpus (Michel Simard and Pierre Plamondon, 1998) which consists of a collection of parallel texts (Canadian Parliament Hansards, United Nations, literary, etc.).

Filename   # Tokens   Equal Frequency Homographs        BAF Analysis                  Ratio
                      (# Segments / Chars per Segment)  (# Segments / Chars per Segment)
citi1.fr   17556      49 / 1860                         742 / 120                     6,6%
citi2.fr   33539      48 / 3360                         1393 / 104                    3,4%
cour.fr    49616      101 / 2217                        1377 / 140                    7,3%
hans.fr    82834      45 / 8932                         3059 / 117                    1,5%
ilo.fr     210342     68 / 15654                        7129 / 137                    1,0%
onu.fr     74402      27 / 14101                        2559 / 132                    1,1%
tao1.fr    10506      52 / 1019                         365 / 95                      14,2%
tao2.fr    9825       51 / 972                          305 / 97                      16,7%
tao3.fr    4673       44 / 531                          176 / 62                      25,0%
verne.fr   79858      29 / 12736                        2521 / 127                    1,2%
xerox.fr   66605      114 / 2917                        3454 / 85                     3,3%
Average    111883     60 / 10271                        3924 / 123                    1,5%

Table 5: Comparison with the Jacal alignment (Michel Simard and Pierre Plamondon, 1998).

The table above shows that, on average, we got about 1.5% of the total segments, resulting in about 10k characters per segment. This number ranges from 25% (average: 500 characters per segment) for a small text (tao3.fr-en) to 1% (average: 15k characters per segment) for a large text (ilo.fr-en).
Although these are small numbers, we should notice that, in contrast with Michel Simard and Pierre Plamondon (1998), we are not including:
• words defined as cognate “if their four first characters are identical”;
• an ‘isolation window’ heuristics to reduce the search space;
• heuristics to define a search corridor to find candidate correspondence points.
We should stress again that the algorithm reported in this paper is purely statistical and resorts to no heuristics. Moreover, we did not reapply the algorithm to each aligned parallel segment, which would result in finding more correspondence points and, consequently, further segmentation of the parallel texts. Besides, if we use the methodology presented in Joaquim da Silva et al. (1999) for extracting relevant string patterns, we are able to identify more statistically reliable cognates. António Ribeiro and Gabriel Lopes (1999) report a higher number of segments using clusters of points. However, that algorithm does not assure 100% alignment precision and discards some good correspondence points which end up in bad clusters. Our main critique of the use of heuristics is that, though they may be intuitively quite acceptable and may significantly improve the results, as seen with the Jacal alignment for the BAF Corpus, they are just heuristics and cannot be theoretically explained by Statistics.
Conclusions
Confidence bands of linear regression lines help us to identify reliable correspondence points without using empirically found or statistically unsupported heuristics. This paper presents a purely statistical approach to the selection of candidate correspondence points for parallel texts alignment without resorting to heuristics as in previous work. The alignment is not restricted to sentence or paragraph level, for which clearly delimited boundary markers would be needed. It is made at whatever segment size as long as reliable correspondence points are found.
This means that alignment can result at paragraph, sentence, phrase, term or word level. Moreover, the methodology does not depend on the way candidate correspondence points are generated, i.e. although we used homographs with equal frequencies, we could have also bootstrapped the process using cognates (Michel Simard et al. 1992) or a small bilingual lexicon to identify equivalents of words or expressions (Dekai Wu 1994; Pascale Fung and Kathleen McKeown 1997; Melamed 1999). This is a particularly good strategy when it comes to distant languages like English and Chinese, where the number of homographs is reduced. As António Ribeiro et al. (2000b) showed, these tokens account for about 5% for small texts. Aligning languages with such different alphabets requires automatic methods to identify equivalents, as Pascale Fung and Kathleen McKeown (1997) presented, increasing the number of candidate correspondence points at the beginning. Selecting correspondence points improves the quality and reliability of parallel texts alignment. As this alignment algorithm is not restricted to paragraphs or sentences, 100% alignment precision may be degraded by language-specific term order policies in small segments. On average, three filtrations proved enough to avoid crossed segments, which are a result of misalignments. The method is language and character-set independent and does not assume any a priori language knowledge (namely, small bilingual lexicons), text tagging, well-defined sentence or paragraph boundaries, nor one-to-one translation of sentences.
Future Work
At the moment, we are working on the alignment of sub-segments of parallel texts in order to find more correspondence points within each aligned segment in a recursive way. We are also planning to apply the method to large parallel Portuguese–Chinese texts.
We believe we may significantly increase the number of segments we get in the end by using a more dynamic approach to the filtering with linear regression lines, selecting candidate correspondence points at the same time as the parallel text tokens are read in. This approach is similar to Melamed (1999) but, in contrast, it is statistically supported and uses no heuristics. Another area for future experiments will use relevant strings of characters in parallel texts instead of using just homographs. For this purpose, we will apply a methodology described in Joaquim da Silva et al. (1999). This method was used to extract string patterns and it will help us to automatically extract ‘real’ cognates.
Acknowledgements
Our thanks go to the anonymous referees for their valuable comments on the paper. We would also like to thank Michel Simard for providing us the aligned BAF Corpus. This research was partially supported by a grant from Fundação para a Ciência e Tecnologia / Praxis XXI.
References
Peter Brown, Jennifer Lai and Robert Mercer (1991) Aligning Sentences in Parallel Corpora. In “Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics”, Berkeley, California, U.S.A., pp. 169–176.
Kenneth Church (1993) Char_align: A Program for Aligning Parallel Texts at the Character Level. In “Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics”, Columbus, Ohio, U.S.A., pp. 1–8.
Ido Dagan, Kenneth Church and William Gale (1993) Robust Word Alignment for Machine Aided Translation. In “Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives”, Columbus, Ohio, U.S.A., pp. 1–8.
ELRA (European Language Resources Association) (1997) Multilingual Corpora for Co-operation, Disk 2 of 2. Paris, France.
Pascale Fung and Kathleen McKeown (1994) Aligning Noisy Parallel Corpora across Language Groups: Word Pair Feature Matching by Dynamic Time Warping.
In “Technology Partnerships for Crossing the Language Barrier: Proceedings of the First Conference of the Association for Machine Translation in the Americas”, Columbia, Maryland, U.S.A., pp. 81–88.
Pascale Fung and Kathleen McKeown (1997) A Technical Word- and Term-Translation Aid Using Noisy Parallel Corpora across Language Groups. Machine Translation, 12/1–2 (Special issue), pp. 53–87.
William Gale and Kenneth Church (1991) A Program for Aligning Sentences in Bilingual Corpora. In “Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics”, Berkeley, California, U.S.A., pp. 177–184 (short version). Also (1993) Computational Linguistics, 19/1, pp. 75–102 (long version).
Martin Kay and Martin Röscheisen (1993) Text-Translation Alignment. Computational Linguistics, 19/1, pp. 121–142.
Samuel Kotz, Norman Johnson and Campbell Read (1982) Encyclopaedia of Statistical Sciences. John Wiley & Sons, New York Chichester Brisbane Toronto Singapore.
I. Dan Melamed (1999) Bitext Maps and Alignment via Pattern Recognition. Computational Linguistics, 25/1, pp. 107–130.
António Ribeiro, Gabriel Lopes and João Mexia (2000a) Using Confidence Bands for Alignment with Hapaxes. In “Proceedings of the International Conference on Artificial Intelligence (IC’AI 2000)”, Computer Science Research, Education and Applications Press, U.S.A., volume II, pp. 1089–1095.
António Ribeiro, Gabriel Lopes and João Mexia (2000b, in press) Aligning Portuguese and Chinese Parallel Texts Using Confidence Bands. In “Proceedings of the Sixth Pacific Rim International Conference on Artificial Intelligence (PRICAI 2000) – Lecture Notes in Artificial Intelligence”, Springer-Verlag.
Joaquim da Silva, Gaël Dias, Sylvie Guilloré, José Lopes (1999) Using Localmaxs Algorithms for the Extraction of Contiguous and Non-contiguous Multiword Lexical Units.
In Pedro Barahona and José Alferes, eds., “Progress in Artificial Intelligence – Lecture Notes in Artificial Intelligence”, number 1695, Springer-Verlag, Berlin, Germany, pp. 113–132.
Michel Simard, George Foster and Pierre Isabelle (1992) Using Cognates to Align Sentences in Bilingual Corpora. In “Proceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation TMI-92”, Montreal, Canada, pp. 67–81.
Michel Simard and Pierre Plamondon (1998) Bilingual Sentence Alignment: Balancing Robustness and Accuracy. Machine Translation, 13/1, pp. 59–80.
Dekai Wu (1994) Aligning a Parallel English–Chinese Corpus Statistically with Lexical Criteria. In “Proceedings of the 32nd Annual Conference of the Association for Computational Linguistics”, Las Cruces, New Mexico, U.S.A., pp. 80–87.
Thomas Wonnacott and Ronald Wonnacott (1990) Introductory Statistics. 5th edition, John Wiley & Sons, New York Chichester Brisbane Toronto Singapore, 711 p.
A Comparison of Alignment Models for Statistical Machine Translation (2000)
Franz Josef Och and Hermann Ney
Lehrstuhl für Informatik VI, Computer Science Department
RWTH Aachen - University of Technology
D-52056 Aachen, Germany

Abstract
In this paper, we present and compare various single-word based alignment models for statistical machine translation. We discuss the five IBM alignment models, the Hidden-Markov alignment model, smoothing techniques and various modifications. We present different methods to combine alignments. As evaluation criterion we use the quality of the resulting Viterbi alignment compared to a manually produced reference alignment. We show that models with a first-order dependence and a fertility model lead to significantly better results than the simple models IBM-1 or IBM-2, which are not able to go beyond zero-order dependencies.

1 Introduction
In statistical machine translation we set up a statistical translation model Pr(f_1^J | e_1^I) which describes the relationship between a source language (SL) string f_1^J and a target language (TL) string e_1^I. In (statistical) alignment models Pr(f_1^J, a_1^J | e_1^I), a ‘hidden’ alignment a_1^J is introduced which describes a mapping from source word f_j to a target word e_{a_j}. We discuss here the IBM translation models IBM-1 to IBM-5 (Brown et al., 1993b) and the Hidden-Markov alignment model (Vogel et al., 1996; Och and Ney, 2000). The different alignment models we present provide different decompositions of Pr(f_1^J, a_1^J | e_1^I). An alignment a_1^J for which holds

a_1^J = argmax_{a_1^J} Pr(f_1^J, a_1^J | e_1^I)

for a specific model is called Viterbi alignment of this model.
So far, no well established evaluation criterion exists in the literature for these alignment models. For various reasons (non-unique reference translation, over-fitting and statistically deficient models) it seems hard to use training/test perplexity as in language modeling. Using translation quality is problematic, as translation quality is not well defined and as there are additional influences such as language model or decoder properties. We propose in this paper to measure the quality of an alignment model using the quality of the Viterbi alignment compared to a manually produced alignment. This allows an automatic evaluation, once a reference alignment has been produced. In addition, it results in a very precise and reliable evaluation criterion that is well suited to assess various design decisions in modeling and training of statistical alignment models.

2 Models
In this paper we use the models IBM-1 to IBM-5 from (Brown et al., 1993b) and the Hidden-Markov alignment model (HMM) from (Vogel et al., 1996; Och and Ney, 2000). All these models provide different decompositions of the probability Pr(f_1^J, a_1^J | e_1^I). The alignment a_1^J may contain alignments a_j = 0 with the ‘empty’ word e_0 to account for French words that are not aligned to any English word. All models include lexicon parameters p(f | e) and additional parameters describing the probability of an alignment. We now sketch the structure of the six models:
• In IBM-1 all alignments have the same probability.
• IBM-2 uses a zero-order alignment model p(a_j | j, J, I) where different alignment positions are independent from each other.
• The HMM uses a first-order model p(a_j | a_{j-1}, I) where the alignment position a_j depends on the previous alignment position a_{j-1}.
• In IBM-3 we have an (inverted) zero-order alignment model p(j | a_j, J, I) with an additional fertility model p(φ | e) which describes the number of words φ aligned to an English word e.
• In IBM-4 we have an (inverted) first-order alignment model p(j | j') and a fertility model p(φ | e).
• The models IBM-3 and IBM-4 are deficient as they waste probability mass on non-strings. IBM-5 is a reformulation of IBM-4 with a suitably refined alignment model in order to avoid deficiency.
So the main differences of these models lie in the alignment model (which may be zero-order or first-order), in the existence of an explicit fertility model and whether the model is deficient or not.
For HMM, IBM-4 and IBM-5 it is straightforward to extend the alignment parameters to include a dependence on the word classes of the words around the alignment position. In the HMM alignment model we allow for a dependence of the position a_j on the class of the preceding target word. Correspondingly, we can include similar dependencies on French and English word classes in IBM-4 and IBM-5 (Brown et al., 1993b). The classification of the words into a given number of classes (here: 50) is performed automatically by another statistical learning procedure (Kneser and Ney, 1991).

3 Training
The training of all alignment models is done by the EM-algorithm using a parallel training corpus (f_s, e_s), s = 1, ..., S. In the E-step the counts for one sentence pair (f, e) are calculated. For the lexicon parameters the counts are:

c(f | e; f, e) = Σ_a Pr(a | f, e) Σ_j δ(f, f_j) δ(e, e_{a_j})

In the M-step the lexicon parameters are:

p(f | e) ∝ Σ_s c(f | e; f_s, e_s)

Correspondingly, the alignment and fertility probabilities can be estimated.
The models IBM-1, IBM-2 and HMM have a particularly simple mathematical form so that the EM algorithm can be performed exactly, i.e. in the E-step it is possible to efficiently consider all alignments. For the HMM we do this using the Baum-Welch algorithm (Baum, 1972).
Since there is no efficient way in the fertility models IBM-3 to 5 to avoid the explicit summation over all alignments in the EM-algorithm, the counts are collected only over a subset of promising alignments. For IBM-3, IBM-4 and IBM-5 we perform the count collection only over a small number of good alignments. In order to keep the training fast we can take into account only a small fraction of all alignments. We will compare three different possibilities of using subsets of different size:
• The simplest possibility is to perform Viterbi training using only the best alignment that can be found. As the calculation of the Viterbi alignment itself is very time-consuming it is computed only approximately using the method described in (Brown et al., 1993b).
• In (Al-Onaizan et al., 1999) it was suggested to use also the neighboring alignments (i.e. alignments differing by one move/swap) of the best alignment reachable. 1
• In (Brown et al., 1993b) an even larger set of alignments was used including also the ‘pegged’ alignments.
1 Our implementation of the IBM translation models is based on GIZA, which is part of the publicly available toolkit for statistical machine translation (Al-Onaizan et al., 1999).
The different models are trained in succession on the same data, where the final parameter values of a simpler model serve as starting point for a more complex model. We will show that by using the HMM instead of IBM-2 while bootstrapping to IBM-4/IBM-5 the alignment quality can be significantly improved.

4 Smoothing
To overcome the problem of over-fitting on the training data and to cope better with rare words we apply smoothing on alignment and fertility probabilities. For the alignment probabilities of the HMM (and correspondingly for IBM-4 and IBM-5) we perform an interpolation with a constant distribution:

p'(a_j | a_{j-1}, I) = α · 1/I + (1 – α) · p(a_j | a_{j-1}, I)

For the fertility probabilities we assume that there is a dependence on the number of letters g(e) of e and estimate also a distribution p(φ | g) using the EM-algorithm. Figure 1 shows the relation between the number of letters g of a (German) word and the average fertility (φ̄(g) = Σ_φ φ · p(φ | g)). We can see that longer words have a higher fertility.
Figure 1: Average fertility as a function of the length (in letters) of a German word (on the Verbmobil task, see later).
The fertility distribution used in training is then computed as follows:

p'(φ | e) = n(e)/(β + n(e)) · p(φ | e) + β/(β + n(e)) · p(φ | g(e))

Here n(e) denotes the frequency of e in the training corpus. This ensures that for frequent words, i.e. n(e) ≫ β, the specific distribution p(φ | e) dominates and for rare words, i.e. n(e) ≪ β, the general distribution p(φ | g(e)) dominates.
The interpolation parameters α and β are optimized with respect to alignment quality on a validation corpus.

5 Is deficiency a problem?
When using the EM-algorithm on IBM-3 and IBM-4, we observed that during the EM-iterations more and more words are aligned to the empty word. This results in a bad alignment quality as too many words are aligned to the empty word. This does not occur when using the other models. 2 We believe that the reason for this lies in the fact that IBM-3 and IBM-4 are deficient.
2 This effect did not occur in (Brown et al., 1993b) as IBM-3 and IBM-4 were not trained directly.
The use of the EM-algorithm guarantees that the likelihood an alignment model assigns to the training corpus is steadily increasing. This is true for deficient and for non-deficient models likewise. However, for deficient models the likelihood can be increased simply by reducing the amount of deficiency. In IBM-3 and IBM-4 as defined in (Brown et al., 1993b) the distortion model for real words is deficient, but the distortion model for the empty word is non-deficient, so the EM-algorithm can increase likelihood by simply aligning more and more words to the empty word.
Therefore, we changed IBM-3 and IBM-4 slightly to obtain also a deficient distortion model for the empty word. The distortion probability is set to p(j) = 1/J for every French word aligned to the empty word.

6 Evaluation methodology
In the following, we present an annotation scheme for single-word based alignments and a corresponding evaluation criterion. For a different approach to assess alignment quality see (Ahrenberg et al., 2000).
It is well known that manually performing a word alignment is a complicated and ambiguous task (Melamed, 1998). Therefore, we developed an annotation scheme that makes it possible to annotate explicitly the ambiguous alignments. We allowed the human experts who performed the annotation to specify two different kinds of alignments: an S (sure) alignment which is used for alignments that are unambiguous and a P (possible) alignment which is used for alignments that might or might not exist. The P relation is used especially to align words within idiomatic expressions, free translations, and missing function words (S ⊆ P).
The thus obtained reference alignment may contain many-to-one and one-to-many relationships. Figure 2 shows an example of a manually aligned sentence with S and P relations.
Figure 2: Example of a manual alignment with sure (filled dots) and possible connections, for the sentence pair "yes , then I would say , let us leave it at that ." – "ja , dann würde ich sagen , verbleiben wir so ."
The quality of an alignment A = {(j, a_j) | a_j > 0} is then computed by appropriately redefined precision and recall measures:

recall = |A ∩ S| / |S| ,  precision = |A ∩ P| / |A|

and the following error rate:

AER(S, P; A) = 1 – ( |A ∩ S| + |A ∩ P| ) / ( |A| + |S| )

Thereby, a recall error can only occur if a S(ure) alignment is not found and a precision error can only occur if a found alignment is not even P(ossible).
The set of sentence pairs for which the manual alignment is produced is randomly selected from the training corpus. As the alignment is learned unsupervised, these sentence pairs may also be used in training.
Normally, the annotation is performed by two annotators, producing sets S_1, P_1, S_2, P_2. To increase the quality of the reference alignment the annotators are presented the mutual errors and are asked to improve their alignment if possible. From these alignments we finally generate a reference alignment which contains only those S(ure) connections where both annotators agree and it contains all the P(ossible) connections from both annotators. This can be done by forming the intersection of the sure alignments (S = S_1 ∩ S_2) and the union of the possible alignments (P = P_1 ∪ P_2). Thereby, we enforce that, if we compare the sure alignments of every single annotator with the combined reference alignment, we obtain an AER of zero percent.

7 Generalized alignments
The baseline alignment model does not permit a source word to be aligned with two or more target words. Therefore, lexical correspondences like ‘Zahnarzttermin’ for ‘dentist's appointment’ cause problems because a single source word must be mapped on two or more target words. To solve this problem, we perform a train-
®Á«¾`®Á«ßÀ¸Ž¬­Ö¬[³"±Ž«¯¿Á±™¬®Á¸/«ß¶®Á³²Y·T¬®¼¸/«¯Ôèù¯[¸/½K³"·T²Ô¬[¸ ¬±™³¾Ž²g¬g´K¬±™³"¾Ž²g¬¬[¸¯[¸/½K³Q·T² ï Äø­@½¯µ²%¸ŽÀ¬±Ž®«Ú¬Þµ(¸ ±Ž¿Á®¼¾/«¹&²Y«–¬ »Ž²Y·T¬[¸Ž³"¯ ü ê ë ±Ž«¶#" î ë ¸Ž³(²Y±Ž·"­Ô¯[²Y«@¬[²Y«·T² °±Ž®¼³©Älª‘«§¬­K²&¸/¿¿¼¸©µÃ®«K¾K´‡ö ë  ø è ü  û 3Dïgì ü Iù  ú ±Ž«¶ôö Õ  ø è ý û " b ïgì " b ù  ú>¶K²Y«¸Ž¬[²‡¬­²Ô¯²g¬¯¸ŽÂ ¿Á®Á«ÏD¯y®Á«ß¬­K²Ô¬Øµ¸ Ù ®¼¬[²g³"À®(±Ž¿®¼¾/«¹&²Y«@¬¯gÄ Å ²>®Á«^º ·T³²Y±Ž¯[²Ú¬­K²&Ñ@½±Ž¿®¼¬Ø×θŽÂ(¬­K²<±Ž¿®¼¾/«¹&²Y«@¬¯¡µÃ®¼¬­`³²Tº ¯[°²Y·T¬ ¬[¸q°³"²Y·g®Á¯®¼¸/«´n³²Y·g±Ž¿Á¿‰¸Ž³ Õ;,$ ÀD×>·T¸/¹}À®Á«^º ®Á«K¾ö ë ±Ž«¶ ö Õ ®Á«@¬[¸j¸/«K²‡±Ž¿Á®¼¾/«¹&²Y«@¬l¹<±™¬[³Q®T ö ½¯®«K¾¬­K²¸/¿¿¼¸©µÃ®«K¾&·T¸/¹lÀ®«±™¬®¼¸/«>¹&²g¬­¸D¶¯0 2 ª‘«@¬[²g³"¯[²Y·T¬®¼¸/«‡0ö  ö ë üIö Õ 2 ! «®¼¸/«˜0lö  ö ë  ö Õ 2 ògÇ«K²Y¶‡0qªú«÷±§Ç³Q¯[¬y¯¬[²g° ¬­K²Ü®Á«–¬[²g³Q¯[²Y·T¬®¼¸/« ö  ö ë ü ö Õ ®Á¯Î¶²g¬[²g³"¹<®Á«K²Y¶˜Ä ø­²3²Y¿Óº ²Y¹<²Y«–¬¯ µ%®¼¬­®Á« ö ±™³²%½¯[¬®¼Ç²Y¶óÀD×ñÀ¸Ž¬­ Ù¡®¼¬[²g³Àn®–±Ž¿Á®¼¾/«¹&²Y«–¬¯5±Ž«¶y±™³²¬­K²g³²gÂd¸Ž³²(»Ž²g³"× ³"²Y¿Á®Á±™À¿¼²ŽÄ‰Å ²«K¸©µ ²^¬[²Y«¶&¬­²Ì±Ž¿®¼¾/«¹&²Y«@¬Þö ®Á¬[²g³"±™¬®¼»Ž²Y¿¼× À@ס±Ž¶¶®«K¾¿Á®Á«KÏ^¯è ý û 3@ï ¸D·g·g½³³"®Á«K¾ ¸/«¿¼×{®Á« ö ë ¸Ž³®« ö Õ ®¼Â«²Y®¼¬­K²g³yé  «K¸Ž³ í b ­±Y»Ž²Ê±Ž«±Ž¿Á®¼¾/«¹&²Y«@¬W®Á«ö ¸Ž³W®ÁÂK¬­K²Ì¸/¿Á¿Á¸©µÃ®Á«¾ ·T¸/«¶®¼¬®¼¸/«¯Ê­¸/¿Á¶‡0 & ¬­K²¿Á®Á«Ïqè ý û 3@ï ­±Ž¯±+­¸Ž³"®¼àg¸/«@¬±Ž¿«K²Y®¼¾/­^º À¸Ž³yè ý ·òÛ û 3@ï ´è ý ¶ Û û 3Dï ¸Ž³¡±q»Ž²g³¬®Á·g±Ž¿ «K²Y®¼¾/­@À¸Ž³è ý û 3 ·GÛ ï ´è ý û 3 ¶ Û ï ¬­±™¬+®Á¯ ±Ž¿¼³²Y±Ž¶K×Ô®Á«¹ö}´±Ž«¶ & ¬­K²+¯[²g¬Pö  ø è ý û 3@ï ú&¶K¸D²Y¯«K¸Ž¬·T¸/«@¬±Ž®Á« ±Ž¿Á®¼¾/«¹<²Y«–¬¯5µÃ®¼¬­yÀ¸Ž¬­­K¸Ž³"®¼àg¸/«@¬±Ž¿K±Ž«¶ »Ž²g³¬®Á·g±Ž¿˜«²Y®¼¾/­–À¸Ž³"¯YÄ  ÀD»D®Á¸/½¯¿¼×Ž´n¬­K²y®«–¬[²g³"¯²Y·T¬®¼¸/«Î¿Á²Y±Ž¶¯Ã¬[¸Ô±Ž«§±Ž¿Á®¼¾/«^º ¹&²Y«@¬ò¬­±™¬ ­±Ž¯G¸/«¿¼×Ƹ/«K²Tº]¬[¸™º]¸/«K²B±Ž¿Á®¼¾/«¹<²Y«–¬¯ µÃ®¼¬­Ü­®¼¾/­K²g³Ê°³²Y·g®¯®¼¸/«Ü±Ž«¶>±Ú¿¼¸©µ²g³Ã³²Y·g±Ž¿¿ÐÄ ø­K² ½«®Á¸/«&¿¼²Y±Ž¶¯¬[¸l±l­®¼¾/­K²g³³²Y·g±Ž¿¿±Ž«¶<±+¿Á¸©µ²g³°³²Tº ·g®Á¯®Á¸/«ß¸ŽÂ%¬­K²>·T¸/¹}À®Á«K²Y¶÷±Ž¿Á®¼¾/«¹&²Y«–¬gÄßÅ ²Ü¬Ø×D°®Óº ·g±Ž¿Á¿¼×j¸ŽÀ¯[²g³»Ž²&¬­±™¬¬­²y³"²gÇ«K²Y¶§·T¸/¹lÀn®Á«±™¬®¼¸/«§®Á¯ ±™À¿¼²l¬[¸q°³"¸D¶½·T²l±Ž«j±Ž¿Á®Á¾/«¹&²Y«@¬µÃ®¼¬­jÀ²g¬[¬[²g³ ³²Tº 
·g±Ž¿Á¿˜±Ž«¶Ô°³²Y·g®Á¯®Á¸/«Ä ' î)( ÒÊ8-@åØH 80‰¨/6 Å ²{°³"²Y¯[²Y«–¬Ô³²Y¯"½¿¼¬¯‡¸/«ö¬­K² ÈGÉ>ʇË)Ì;ÍË8νÏ÷±Ž«¶ ¬­K²+*),.-./,„Ê10 / ¬±Ž¯[ÏÜèùø5±™À¿¼²+Û ï ÄÞK¸Ž³ À¸Ž¬­¬±Ž¯[Ï^¯ µ²}¹<±Ž«D½±Ž¿Á¿¼×ܱŽ¿Á®Á¾/«K²Y¶j±Ú³"±Ž«¶K¸/¹<¿Áׇ·Q­K¸/¯[²Y«§¯½KÀKº ¯[²g¬Ô¸ŽÂ+¬­K²§¬[³"±Ž®«®Á«K¾3·T¸Ž³"°½¯ èùø‰±™Àn¿¼²§Ý ï Ä K³¸/¹ ¬­®Á¯·T¸Ž³"°½¯ ¬­K²&dz"¯¬Û1 j¯[²Y«–¬[²Y«·T²Y¯+µ²g³²<½¯[²Y¶ ±Ž¯ »9±Ž¿®Á¶±™¬®¼¸/«>·T¸Ž³"°½¯%¬[¸q¸Ž°¬®¹<®¼àg²+¬­K²¯¹&¸D¸Ž¬­^º ®Á«K¾`°±™³"±Ž¹<²g¬[²g³"¯<±Ž«¶Ö¬­K²Ü³²Y¹Ú±Ž®Á«®Á«K¾`¯[²Y«@¬[²Y«·T²Y¯ µ²g³²l½¯[²Y¶Ü±Ž¯Ã¬[²Y¯[¬Ã·T¸Ž³°n½¯gÄ ª‘«x¬­K² Âd¸/¿Á¿¼¸oµÃ®Á«K¾ ¾Ž³"±™°n­¯g´jµ²ñ¶®¯[°¿Á±©× ¬­K² Õ;,$ Âd¸Ž³ ²g»Ž²g³"×®Á¬[²g³"±™¬®¼¸/«&¸ŽÂ¬­K²P,ÉܺرŽ¿¼¾Ž¸Ž³Q®¼¬­¹ÜÄ 0.06 0.08 0.1 0.12 0.14 1 1 1 1 1 2 2 2 2 2 3 3 3 3 3 4 4 4 4 5 5 AER 2 Model viterbi +neighbors +pegging W®¼¾/½K³"²G 0B,Ò²Y·T¬¸ŽÂ(½¯®Á«K¾Ô¹&¸Ž³²&±Ž¿Á®¼¾/«¹&²Y«–¬¯®« ¬[³"±Ž®«®Á«K¾¸ŽÂ5ªúÈÊÉ>º @š= `èKÈGÉ>ʇË)ÌB̈́Ë8νϬ±Ž¯ÏnÄ ï ! «¿¼²Y¯¯«¸Ž¬[²Y¶ ¸Ž¬­K²g³µÃ®¯[²Ž´˜µ²<½¯[²Y¶§Âd¸Ž³+¬[³"±Ž®Á«®«K¾ ¸ŽÂ‰ªúÈÊÉ>º @š=<¸/½K³Ê¹&¸^¶®¼Ç²Y¶q»Ž²g³"¯®¼¸/«Ô¶K²Y¯"·T³"®¼À²Y¶q®« ¯[²Y·T¬®Á¸/«DÄ 354 V7618:97;OV=<?> QŒa@!AB 6 9nVC6DFE A 6GDF< aA 6 A 6 B W®¼¾/½³²H q·T¸/¹<°±™³²Y¯¬­K²³²Y¯"½¿¼¬¯ ¸ŽÀ¬±Ž®Á«K²Y¶`ÀD× ½¯"®Á«K¾>¶®ÓÒ²g³²Y«@¬+«@½¹lÀ²g³"¯¸ŽÂʱŽ¿Á®Á¾/«¹&²Y«@¬¯+®Á« ¬­K² ¬[³"±Ž®«®Á«K¾q¸ŽÂ¬­K²&¯[¸Ž°­®¯[¬®Á·g±™¬[²Y¶`±Ž¿Á®¼¾/«¹&²Y«@¬¡¹&¸D¶Kº ²Y¿Á¯ß¸/«!¬­K²ñÈGÉ>ʇË)ÌB̈́Ë8νÏñ¬±Ž¯[ÏÄ ª‘« ¸Ž³Q¶K²g³3¬[¸ ³²Y¶½·T²Ô¬[³"±Ž®Á«®Á«¾j¬®Á¹&²Ôµ²Ü³²Y¯[¬[³"®·T¬[²Y¶ß¬­²>«@½¹&º À²g³Ê¸ŽÂ7°²g¾Ž¾Ž²Y¶>±Ž¿®¼¾/«¹&²Y«@¬¯ÌÀ@ׇ½¯®Á«¾}¸/«¿Á×Ú¬­K¸/¯[² ±Ž¿Á®Á¾/«¹&²Y«@¬¯¡µÃ­K²g³²&æçèKQ û a ì V ï ®Á¯¡«K¸Ž¬¬[¸D¸>¹y½·"­ ¯¹Ú±Ž¿Á¿¼²g³j¬­±Ž«ñ¬­K²Ö°³¸ŽÀn±™À®Á¿Á®¼¬Þ×G¸ŽÂÚ¬­K²ßÙ¡®¼¬[²g³Àn® ±Ž¿Á®Á¾/«¹&²Y«@¬gÄ>ªÞÂʵ²Ô½¯[²Ú¸/«¿¼× ¬­K²ÚÙ¡®¼¬[²g³Àn® ±Ž¿Á®¼¾/«Kº ¹&²Y«@¬g´¬­K²l³²Y¯½¿¼¬¯Ê±™³"²l¯®¼¾/«®ÁÇ·g±Ž«–¬¿Á×Úµ¸Ž³"¯[²¬­±Ž« ±Ž¶¶®¼¬®¼¸/«±Ž¿Á¿Á×𽯮Á«K¾ ¬­K² «K²Y®¼¾/­@À¸Ž³"­K¸D¸D¶ó¸ŽÂÚ¬­K² Ù¡®¼¬[²g³À®@±Ž¿Á®¼¾/«¹<²Y«–¬gÄWÈ×l¶K¸/®Á«K¾lý °²g¾Ž¾/®Á«K¾Kÿ¼´Žµ²(¸ŽÀKº ¬±Ž®Á«j±Ž«>±Ž¶¶®¼¬®Á¸/«±Ž¿¯¹<±Ž¿Á¿®Á¹<°³¸©»Ž²Y¹<²Y«–¬gÄ ø5±™À¿¼²j <¯­K¸©µ%¯¬­K²+·T¸/¹&°n½K¬®Á«K¾&¬®Á¹<²+¸Ž³Ã°²g³º Âd¸Ž³"¹<®Á«¾Ö¸/«²3®¼¬[²g³"±™¬®¼¸/« 
¸ŽÂ}¬­K² ,ÉܺرŽ¿Á¾Ž¸Ž³"®¼¬­¹ÜÄ ! ¯®Á«K¾Ü±Ô¿Á±™³¾Ž²g³+¯²g¬¸ŽÂ±Ž¿Á®Á¾/«¹&²Y«@¬¯ ¯"®¼¾/«®¼Ç·g±Ž«@¬¿¼× ®Á«·T³²Y±Ž¯[²Y¯ß¬­K²G¬[³Q±Ž®Á«®Á«K¾ñ¬®Á¹&²òÂd¸Ž³Ö¬­K² ¹<¸D¶K²Y¿¯ ª‘ÈÊÉܺ:=&±Ž«¶‡ª‘ÈÌÉ>ºDÄ Õ ¯%ý °²g¾Ž¾/®«K¾Kÿ@×^®¼²Y¿Á¶¯¸/«¿¼×<± ¹&¸^¶K²g³"±™¬[²§®Á¹&°³¸©»Ž²Y¹&²Y«@¬g´±Ž¿Á¿Ã¸/¿Á¿Á¸©µÃ®Á«¾ ³²Y¯½¿¼¬¯ ±™³²`¸ŽÀ¬±Ž®Á«²Y¶ò½¯®«K¾{¬­K²`«K²Y®¼¾/­@À¸Ž³Q­K¸@¸^¶ ¸ŽÂ¬­K² Ù¡®¼¬[²g³À®±Ž¿®¼¾/«¹&²Y«@¬gÄ HJILKNMO >P<#Q KRK W®¼¾/½³²O=¡·T¸/¹&°±™³²Y¯‰¬­K² ³"²Y¯½¿¼¬¯¸ŽÂ½¯®Á«K¾Êª‘ÈÊÉܺ ݧ¸Ž³qÍ É`É ®«÷À¸D¸Ž¬¯[¬[³"±™°°®Á«K¾`¬­K²>Âd²g³¬®Á¿Á®Á¬Ø×{¸/« ¬­K²*ÈGÉ>ʇË8ÌBÍË(ÎeÏ>¬±Ž¯[ÏÄjø­²ÚÍ É`É ±Ž¿Á®Á¾/«¹&²Y«@¬ ¹&¸^¶K²Y¿5×^®¼²Y¿Á¶¯ ¯®¼¾/«®ÁÇ·g±Ž«–¬¿Á×>À²g¬[¬[²g³³²Y¯½¿¼¬¯%¬­±Ž« ª‘ÈÊÉܺÞÝDÄ øÃ­K²À²Y¯[¬ ³"²Y¯½¿¼¬¯(±™³²Ã¸ŽÀ¬±Ž®«K²Y¶Ú®¼Âª‘ÈÊÉܺ ÷®Á¯‡¸/¹<®Á¬[¬[²Y¶ö®Á« ¬­K² ¬[³"±Ž®Á«®«K¾ß±Ž«¶ð¬­²§Í¡É`É ø5±™À¿¼²<Û0 ø‰³"±Ž®Á«®«K¾&·T¸Ž³°¸Ž³"±&¯®¼àg²Y¯gÄ õ7±Ž«K¾/½±™¾Ž²Y¯ Å ¸Ž³"¶¯ Ù¸D·g±™À½¿Á±™³× F¸Ž³°½¯ ô^õ\9øõ ôD²Y«@¬[²Y«·T²Y¯ ô^õ øõ ô^õ øõ ÈGÉ>ʇË)ÌB̈́Ë8Î½Ï ,(«K¾/¿Á®Á¯­) » ²g³"¹<±Ž« =ŽÏ =¤ O¤m ŽÝ O ŽÝ  Þ@ Þ *),.-./,„Ê10 /–è‰™Ï ï ³²Y«·"­), «K¾/¿Á®¯­ ™Ï °ŽÝ mDÛ m^ۄ°=¤ ÛO  Ý Þ   *),.-./,„Ê10 /–èÐÝ ™Ï ï ³²Y«·"­), «K¾/¿Á®¯­ Ý ™Ï Ý@m O =@ ÝÞ °ÃÛ  = =O= m =°  *),.-./,„Ê10 /–è‰ ™Ï ï ³²Y«·"­), «K¾/¿Á®¯­  ™Ï °ÊÛgm l=Û m= =@œ@@m =(Ý  œ ŽÝ *),.-./,„Ê10 /–è‘Û ™Ï ï ³²Y«·"­), «K¾/¿Á®¯­ Û ™Ï Ý= °ÃÛ  ݎÝÊÛ Þ@ŽÝ Û1 (Ý@m m°O ø‰±™À¿Á²lÝ 0(Éj±Ž«D½±Ž¿Á¿Á×q±Ž««K¸Ž¬±™¬[²Y¶Î¬[²Y¯[¬Ã·T¸Ž³"°¸Ž³Q±^Ä Å ¸Ž³"¶¯ F¸Ž³"°½¯ ô^õ øõ ô^²Y«–¬[²Y«·T²Y¯ ÈGÉ>ʇË)Ì;ÍË(ÎeÏ ŽÝ ^Û1@ = *),.-./J,Ê10 / °@mš=¤ m=¤   ¹&¸^¶K²Y¿ °±™³"±Ž¹&²g¬[²g³Q¯Ü±™³² ½¯[²Y¶ð¬[¸ ¶®¼³²Y·T¬¿Á× ²Y¯[¬®Óº ¹<±™¬[²§¬­K²§ª‘ÈÌÉ>º:= ¹&¸^¶K²Y¿%°±™³"±Ž¹&²g¬[²g³"¯YÄBªú« ¬­K² ¿Á±™¬[²g³&®¼¬[²g³"±™¬®Á¸/«¯g´WªúÈÊÉ>º:= ®Á¯}±™Àn¿¼²Ú¬[¸`³²Y¶½·T²‡¬­K² ±Ž¶K»™±Ž«–¬±™¾Ž² ¸ŽÂܽ¯®Á«¾ Í É`É{ÄyÈ̽K¬ß®Á« ¬­²ò²Y«¶ µ²`¯[¬®¿Á¿¸ŽÀ¬±Ž®«òÀ²g¬[¬[²g³Ô³²Y¯½¿Á¬¯<µÃ­K²Y« ½¯®«K¾3®Á« À¸@¸Ž¬¯¬[³"±™°°®Á«¾+Í É`É èùՌ,ò0 D°5S ï ®Á«¯[¬[²Y±Ž¶Ú¸ŽÂ ª‘ÈÌÉ>ºÞÝjèùÕ;,$j08m@ğ=TS ï Ä ª‘«Ú¬­²U*),.-./,„Ê10 /–è‰™Ï ï ¬±Ž¯[ÏÎèù¯[²g²çW®¼¾/½K³"²; ï ´ ¬­K²‡²g³³¸Ž³y³"±™¬[²Y¯±™³²Ô­®¼¾/­K²g³}²Y¯[°²Y·g®Á±Ž¿Á¿¼×§À²Y·g±Ž½¯[² ¸ŽÂ¬­K² 
­®¼¾/­<»Ž¸^·g±™À½¿±™³×&¯®¼àg²ŽÄø­K² ½¯[²¸ŽÂ˜Í¡ÉÎÉ ®Á«`¬[³"±Ž®Á«®Á«K¾‡×D®¼²Y¿¶¯%±Ž«{²g»Ž²Y«{¯[¬[³¸/«K¾Ž²g³³"²Y¶½·T¬®¼¸/« ®Á«yՌ,+Ä9ªú«@¬[²g³²Y¯[¬®«K¾/¿¼×Ž´/±Ž¿¼³²Y±Ž¶×+¬­K²ÌՌ,÷¸ŽÂ¬­K² Ç«±Ž¿5®¼¬[²g³"±™¬®Á¸/«Î¸ŽÂ(Í É`É è‘Û°DÄcS ï ×D®¼²Y¿¶¯À²g¬[¬[²g³ ³²Y¯½¿¼¬¯+¬­±Ž«ß¬­K²qÀ²Y¯¬j,(É>ºØ®¼¬[²g³"±™¬®¼¸/«ßµÃ­K²Y«ß½¯‘º ®Á«K¾ ª‘ÈÌÉ>ºÞÝ ®Á«yÀ¸@¸Ž¬¯[¬[³Q±™°°®Á«K¾yèÐÝ^ÄcS ï ÄÅ ²Ê·T¸/«^º ·g¿Á½¶²(¬­±™¬®¼¬W®Á¯W®Á¹&°¸Ž³¬±Ž«@¬7¬[¸+¯¬±™³¬W¬­K²Ì¬[³"±Ž®Á«®Á«¾ ¸ŽÂ&¬­K²3¯¸Ž°­®Á¯[¬®·g±™¬[²Y¶ñ±Ž¿Á®¼¾/«¹&²Y«–¬>¹<¸D¶K²Y¿¯ÜµÃ®¼¬­ ¾Ž¸D¸D¶>®«®¼¬®Á±Ž¿°±™³"±Ž¹<²g¬[²g³"¯gÄ ø­²½¯²¸ŽÂnªúÈÊÉ>º ¡±™Â¬[²g³Í¡É`É ¹<±™Ï޲Y¯W³"²Y¯½¿¼¬¯ µ¸Ž³"¯[²Ž´lÀn½K¬>Ç«±Ž¿¿¼×Gª‘ÈÊÉܺ:= °³¸^¶½·T²Y¯>À²Y¯¬j³²Tº ¯½¿Á¬¯è‘ÛDÄcS ï ÄÕ ¯[¬[¸/«®¯­®Á«K¾/¿Á׎´©ª‘ÈÌÉ>º%°³¸^¶½·T²Y¯ µ¸Ž³"¯[²+³²Y¯"½¿¼¬¯Ê¬­±Ž«>ªúÈÊÉ>º:=KÄÌøÃ­®Á¯Ê®Á¯¹Ú±Y×DÀ²À²Tº ·g±Ž½¯[²ª‘ÈÊÉܺ­±Ž¯±¿¼¸Ž¬¹&¸Ž³²¡¬[³"±Ž®Á«®Á«K¾y°±™³"±Ž¹&²Tº ¬[²g³"¯%±Ž«¶>¬­K²}¶®Á¯[¬[¸Ž³¬®Á¸/«>¹&¸^¶K²Y¿˜½¯[²Y¯¸/«¿¼×Ô±Ú¶K²Tº °²Y«¶K²Y«·T²&¸/«3¬­K²-³²Y«·"­3µ(¸Ž³Q¶3·g¿Á±Ž¯¯YÄ*K¸Ž³y¬­K² Âd¸/¿Á¿¼¸©µ%®Á«K¾+²^°²g³"®Á¹&²Y«@¬¯µ²%½¯²Ã¬[³"±Ž®Á«®Á«K¾+¯·Q­K²Y¹&² Û+VXW7YZY[V =KÄ \^] V`_aDb> Qdc 9e>1>fD 4 A 6 B W®¼¾/½K³"²Œ l¯"­K¸©µÃ¯ ¬­K²Ã²TÒ²Y·T¬Ì¸ŽÂ˜½¯®Á«¾¸/½K³¹<¸D¶^º ®¼Ç²Y¶*ª‘ÈÊÉܺ:= èù¯[²Y·T¬®Á¸/«D ï ±Ž«¶B¯¹&¸D¸Ž¬­®Á«K¾ ¬­K² ±Ž¿Á®¼¾/«¹&²Y«–¬[oÂd²g³¬®Á¿®¼¬Ø×q°³¸ŽÀ±™À®¿Á®¼¬®¼²Y¯gÄ5Å ²}¯²g²+¬­±™¬ ø5±™À¿¼²B 0OF̸/¹&°½K¬®Á«¾¬®Á¹&²Ã¸/«q¬­K²;ÈÆÉ(ʇË8ÌB̈́Ë8Î½Ï ¬±Ž¯[Ï{èd¸/«  ÚÉj­àjð5²Y«–¬®Á½¹xªª(¹Ú±Ž·"­®Á«² ï Ä ¯[²Y·T¸/«¶¯Ã°²g³®¼¬[²g³"±™¬®¼¸/« ±Ž¿Á®Á¾/«¹&²Y«@¬¯[²g¬ ªúÈÊÉ>º ª‘ÈÌÉ>º:= ª‘ÈÊÉܺ Ù¡®¼¬[²g³À® Ûgm ݎÝ °@m ¶ «²Y®¼¾/­–À¸Ž³"¯ Ý  =@  Û   ¶ °²g¾Ž¾/®Á«¾ Û      0.06 0.08 0.1 0.12 0.14 1 1 1 1 1 2/H 2/H 2/H 2/H 2/H 3/4 3/4 3/4 3/4 3/4 4 4 4 4 4 5 5 5 AER 2 Model 1-2-3-4-5 1-HMM-3-4-5 1-HMM-4 W®¼¾/½K³"²´=$0 F¸/¹<°±™³"®Á¯[¸/« ¸ŽÂ ½¯"®Á«K¾ ªúÈÊÉ>ºÞÝB¸Ž³ Í¡ÉÎÉ ®«öÀ¸@¸Ž¬¯[¬[³Q±™°°®Á«K¾÷ª‘ÈÊÉܺ @š=  èKÈGÉ>ʇËg ÌB̈́Ë8νÏ<¬±Ž¯[Ï ï Ä ½¯"®Á«K¾ ¬­K²3¯¬±Ž«¶±™³"¶ö»Ž²g³Q¯®¼¸/«ö¸ŽÂª‘ÈÌÉ>º:=ò×^®¼²Y¿Á¶¯ ±x­®Á¾/­K²g³ Õ;,$ µÃ­®Á·Q­ ®¯ ¹Ú±Ž®Á«¿¼× ¶½K²ó¬[¸ ± µ¸Ž³"¯[² ³²Y·g±Ž¿Á¿ÐÄWÅ𮼬­K¸/½K¬Ì¯¹&¸D¸Ž¬­®Á«¾K´–µ² ±Ž¿Á¯¸}¸ŽÀKº 
¯[²g³"»Ž²q²Y±™³"¿Á×{¸©»Ž²g³[º]Ǭ[¬®Á«K¾$0ÚÕ;,$ ®Á«·T³"²Y±Ž¯[²Y¯±™Â¬[²g³ ¬­K²&¯[²Y·T¸/«¶{®¼¬[²g³"±™¬®¼¸/«`¸ŽÂÌÍ É`É3ĘÕ%«±Ž¿Á×@àY®Á«¾q¬­K² ±Ž¿Á®Á¾/«¹&²Y«@¬¯‰¯­¸©µÃ¯‰¬­±™¬5¬­K²¯"¹&¸@¸Ž¬­®Á«K¾Ã¸ŽÂKÂd²g³¬®Á¿¼º ®¼¬Þ×q°³¸ŽÀ±™À®Á¿®¼¬®¼²Y¯ ±Ž¿¯[¸&¯®¼¾/«®¼Çn·g±Ž«–¬¿¼×Ú³²Y¶½·T²Y¯Ì¬­K² °³"¸ŽÀ¿¼²Y¹ ¬­±™¬ ³Q±™³²lµ¸Ž³"¶¯%¸ŽÂ¬[²Y«`Âd¸Ž³"¹ý ¾/±™³À±™¾Ž² ·T¸/¿Á¿Á²Y·T¬[¸Ž³"¯gÿ®Á«Î¬­±™¬¡¬­K²g×>¬[²Y«¶`¬[¸Ü±Ž¿Á®¼¾/«Î¬[¸Ü±q¿¼¸Ž¬ ¸ŽÂWµ¸Ž³"¶¯lèù¯[²g²‡èùÈ(³"¸©µÃ«j²g¬Ã±Ž¿ÐﴘÛ  ™± ï[ï Ä 0.14 0.16 0.18 0.2 0.22 0.24 0.26 0.28 0.3 1 1 1 1 1 2/H 2/H 2/H 2/H 2/H 3/4 3/4 3/4 3/4 3/4 4 4 4 4 4 5 5 5 AER 2 Model 1-2-3-4-5 1-HMM-3-4-5 1-HMM-4 W®¼¾/½K³²  0 F¸/¹<°±™³"®Á¯[¸/« ¸ŽÂ ½¯®Á«K¾ ª‘ÈÌÉ>º Ý ¸Ž³ Í¡ÉÎÉ ®« À¸@¸Ž¬¯¬[³"±™°°®Á«¾ ªúÈÊÉ>º @š=  èh*),.-./,„Ê10 /–è‰™Ï ï ¬±Ž¯[Ï ï Ä 0.06 0.08 0.1 0.12 0.14 1 1 1 1 1 H H H H H 4 4 4 4 4 4 4 4 AER 2 Model 1-H-4:standard 1-H-4:modifiedModel4 1-H-4:modifiedModel4+smoothing W®¼¾/½K³² 0 ,Ò²Y·T¬q¸ŽÂ+¯¹&¸D¸Ž¬­®Á«K¾òèKÈGÉ>ʇË)Ì;ÍË8Î½Ï ¬±Ž¯[Ï ï Ä i E A 6 Bnaj@ea < B V`<?DF< aPA 6 A 6 B _k>< l:8 E ø5±™À¿¼² =ñ¯"­K¸©µÃ¯{¬­K² ²TÒ²Y·T¬3¸ŽÂÔ½¯®Á«K¾ö¶®¼Ò²g³[º ²Y«@¬<±Ž¹&¸/½«@¬¯&¸ŽÂ%¬[³Q±Ž®Á«®Á«K¾`¶±™¬±^Ä Õ%¯²^°²Y·T¬[²Y¶´ ¹&¸Ž³²ß¬[³"±Ž®Á«®«K¾ò¶±™¬± ­K²Y¿¼°¯>¬[¸ ®Á¹<°³¸©»Ž²Ö±Ž¿Á®¼¾/«^º ¹&²Y«@¬ Ñ@½±Ž¿®¼¬Ø×öÂd¸Ž³ ±Ž¿Á¿¹&¸D¶²Y¿Á¯gÄ Íøoµ(²g»Ž²g³Y´&Âd¸Ž³ ª‘ÈÌÉ>º"Û%¬­K²%³"²Y¿Á±™¬®¼»Ž²%®¹&°³¸o»Ž²Y¹&²Y«–¬®Á¯»Ž²g³"×Ú¯¹<±Ž¿Á¿ ·T¸/¹&°±™³"²Y¶3¬[¸Î¬­K²Ú³²Y¿±™¬®¼»Ž²q®Á¹<°³¸©»Ž²Y¹<²Y«–¬½¯®Á«¾ Í¡ÉÎÉ ±Ž«¶Üª‘ÈÊÉܺ:=KÄ m VC6V=< a@!A n V=oqp @!AB 6 9nVC6DFE ø5±™À¿¼²çy¯­K¸©µ%¯Ì°³²Y·g®Á¯®Á¸/«´–³²Y·g±Ž¿¿±Ž«¶‡Õ;,$ ¸ŽÂ ¬­K²‡¿Á±Ž¯[¬}®Á¬[²g³"±™¬®¼¸/«{¸ŽÂʪ‘ÈÊÉܺ:=§Âd¸Ž³lÀ¸Ž¬­{¬[³Q±Ž«¯¿Á±9º ¬®¼¸/«G¶®¼³²Y·T¬®¼¸/«¯gÄñ,(¯[°²Y·g®Á±Ž¿Á¿¼×߁¸Ž³‡¬­K²`¿Á±Ž«K¾/½±™¾Ž² °±Ž®¼³I» ²g³"¹<±Ž«^ºq,(«K¾/¿Á®Á¯"­ñèKÈGÉ>ʇË8ÌBÍË(ÎeÏ{¬±Ž¯[Ï ï µ(² ¸ŽÀ¯[²g³"»Ž²}¬­±™¬ ÀD×j½¯®Á«¾*» ²g³"¹<±Ž«`±Ž¯¡¯[¸/½K³"·T²¿Á±Ž«^º ¾/½±™¾Ž²Î¬­K²jÕ;,$ ®¯<¹}½·"­ò­®¼¾/­K²g³&¬­±Ž«òÀ@×÷½¯‘º ®Á«K¾º,(«K¾/¿Á®Á¯­÷±Ž¯<¯¸/½K³"·T²j¿Á±Ž«¾/½±™¾Ž²ŽÄ ø­®Á¯®Á¯À²Tº ø5±™À¿¼² =$0¿,Ò²Y·T¬ ¸ŽÂν¯®Á«K¾ ¶®ÓÒ²g³²Y«@¬÷±Ž¹<¸/½«–¬ ¸ŽÂ‡¬[³"±Ž®Á«®«K¾ ¶±™¬±uèh*r, 
-./J,„Ê10:/Ö¬±Ž¯[Ï´&¬[³"±Ž®Á«®«K¾ ¯·Q­K²Y¹&²ÚÛ5V Í É`ÉsV = ï Ä ÕŒ,utvS)w F¸Ž³"°½¯ ª‘ÈÊÉܺ"Û Í É`É ªúÈÊÉ>º:= *),.-./,„Ê10 /–è‰™Ï ï =KĀ Û°DÄc ÛDĀ *),.-./,„Ê10 /–èÐÝ ™Ï ï ^ۙĀ Û=KĀ ÛYÝDĀ *),.-./,„Ê10 /–è‰ ™Ï ï ^Ā ÛYÝD° Û1^Ä m *),.-./,„Ê10 /–è‘Û ™Ï ï Ý Dğ= ێۙÄc Dğ= ·g±Ž½¯²3¬­K²{Àn±Ž¯[²Y¿Á®Á«K² ±Ž¿Á®Á¾/«¹&²Y«@¬Ü³²g°³²Y¯[²Y«@¬±™¬®¼¸/« ±Ž¯&±Î»Ž²Y·T¬[¸Ž³ ü ê ë ¶K¸D²Y¯®Á«3¬­±™¬·g±Ž¯²‡Âd¸Ž³À®¶3¬­±™¬ ¬­K²%¸ŽÂ¬[²Y«q¸^·g·g½K³³"®Á«¾» ²g³"¹<±Ž«qµ(¸Ž³Q¶Ú·T¸/¹&°¸/½«¶¯ ±Ž¿Á®Á¾/«Ô¬[¸Ú¹&¸Ž³²+¬­±Ž«Ü¸/«¿¼×‡¸/«K²ò, «¾/¿Á®Á¯­‡µ¸Ž³"¶Ä øÃ­K²²TÒ²Y·T¬¸ŽÂ̹&²g³¾/®Á«¾‡±Ž¿Á®Á¾/«¹&²Y«@¬¯%ÀD×jÂd¸Ž³"¹º ®Á«¾q¬­K²&®Á«@¬[²g³"¯[²Y·T¬®Á¸/«´¬­K²&½«®¼¸/«Î¸Ž³¡¬­K²³²gÇ«K²Y¶ ·T¸/¹}À®Á«±™¬®Á¸/«ó¸ŽÂ‡¬­K²÷Ù¡®¼¬[²g³À®y±Ž¿Á®¼¾/«¹<²Y«–¬¯ èù¯[²g² ¯[²Y·T¬®Á¸/« m ï ¸ŽÂÔÀ¸Ž¬­B¬[³"±Ž«¯"¿Á±™¬®¼¸/«u¶®Á³²Y·T¬®¼¸/«¯ ®Á¯ ¯­¸©µÃ«®Á«yø‰±™Àn¿¼² DÄWÈ×}½¯"®Á«K¾Ã¬­²(³²gÇn«K²Y¶}·T¸/¹}À®Óº «±™¬®Á¸/«&µ(² ·g±Ž«Ú®Á«·T³²Y±Ž¯[²°³"²Y·g®Á¯®¼¸/«&±Ž«¶<³²Y·g±Ž¿Á¿K¸/« ±Ž¿Á¿–¬±Ž¯[Ï^¯gÄ5øÃ­K² ¿¼¸oµ(²Y¯[¬5Ռ,3¸/«}¬­K²—ÈÆÉ(ʇË8ÌB̈́Ë8Î½Ï ¬±Ž¯[Ï}®Á¯˜¸ŽÀ¬±Ž®«K²Y¶}½¯®Á«K¾Ê¬­K²³²gÇ«²Y¶l·T¸/¹}À®Á«±™¬®¼¸/« ¹&²g¬­¸D¶Ä ø­K²¿¼¸©µ²Y¯[¬¡ÕŒ, ¸/«j¬­²L*),.-./,„Ê10 / ¬±Ž¯[ÏÜ®Á¯¸ŽÀ¬±Ž®Á«K²Y¶Ü½¯"®Á«K¾&®Á«@¬[²g³"¯[²Y·T¬®¼¸/«Ä È×ց¸Ž³Q¹<®Á«K¾§± ½«®¼¸/«Ö¸Ž³<®Á«@¬[²g³"¯[²Y·T¬®Á¸/«÷¸ŽÂ%¬­K² ±Ž¿Á®Á¾/«¹&²Y«@¬¯}µ²Ô·g±Ž«÷¸ŽÀ¬±Ž®Á«ß³²Y·g±Ž¿Á¿¸Ž³°³²Y·g®Á¯®Á¸/« »™±Ž¿Á½K²Y¯lèdÀn½K¬«K¸Ž¬ÊÀ¸Ž¬­ ï ¸©»Ž²g³ç bS‡Ä x y 470(=$'Øä 6™åØ470 Å ²<­±©»Ž²Ú¶®Á¯"·g½¯¯[²Y¶Î»9±™³Q®¼¸/½¯ ²D¬[²Y«¯®¼¸/«¯¡¬[¸Ü¯[¬±9º ¬®Á¯¬®Á·g±Ž¿7±Ž¿Á®Á¾/«¹&²Y«@¬%¹&¸^¶K²Y¿Á¯gÄÊÕ «j²g»™±Ž¿Á½±™¬®Á¸/«Î·T³"®¼º ¬[²g³"®Á¸/«´®ÐÄ ²ŽÄ¬­K²+±Ž¿Á®Á¾/«¹&²Y«@¬²g³"³¸Ž³Ê³"±™¬[²Ž´µÌ±Ž¯¯½K¾™º ¾Ž²Y¯[¬[²Y¶q±Ž«¶&³²Y¯"½¿¼¬¯W¸/«<¶®¼Ò²g³²Y«@¬¬±Ž¯[Ï^¯µ²g³²Ê°³²Tº ¯[²Y«@¬[²Y¶Ä Å ²Ö­±Y»Ž² ¯­K¸oµÃ«ñ¬­±™¬`¯[¸Ž°n­®Á¯[¬®Á·g±™¬[²Y¶ ±Ž¿Á®Á¾/«¹&²Y«@¬`¹&¸^¶K²Y¿Á¯ÎµÃ®¼¬­*± dz"¯[¬‘º]¸Ž³Q¶K²g³`¶²g°²Y«Kº ¶K²Y«·T²Ö±Ž«¶ ±G²g³"¬®Á¿Á®¼¬Þ× ¹&¸D¶²Y¿}¿¼²Y±Ž¶ó¬[¸ö¯®¼¾/«®ÁÇKº ·g±Ž«@¬¿¼×§À²g¬[¬[²g³l³²Y¯½¿¼¬¯¡¬­±Ž« ¬­K²q¯®Á¹&°¿Á²¹<¸D¶K²Y¿¯ ª‘ÈÊÉܺ"Û3¸Ž³jª‘ÈÌÉ>ºÞÝDÄ Å ²ß­±©»Ž²ß¶K²Y¯·T³Q®¼À²Y¶ð»9±™³"®¼º ¸/½¯‡­K²Y½K³Q®Á¯[¬®Á·g¯Ú¬­±™¬‡®¹&°³¸o»Ž²j°³²Y·g®¯®¼¸/«´Ì³²Y·g±Ž¿Á¿ 
¸Ž³qÀ¸Ž¬­÷ÀD× ·T¸/¹lÀn®Á«®Á«K¾§Ù¡®¼¬[²g³À®Ì±Ž¿Á®Á¾/«¹&²Y«@¬¯&¸ŽÂ À¸Ž¬­Ü¬[³"±Ž«¯¿Á±™¬®¼¸/«Ü¶®Á³²Y·T¬®¼¸/«¯gÄ ½K³¬­K²g³+®Á¹<°³¸©»Ž²Y¹<²Y«–¬¯®Á«`°³"¸D¶½·g®Á«K¾ÚÀ²g¬[¬[²g³ ±Ž¿Á®Á¾/«¹&²Y«@¬¯l±™³²q²D°²Y·T¬[²Y¶ß³¸/¹¹<±™Ï^®Á«K¾j½¯²<¸ŽÂ ·T¸Ž¾/«±™¬[²Y¯Y´@±Ž«¶}³"¸/¹ó¯[¬±™¬®¯[¬®Á·g±Ž¿^±Ž¿Á®¼¾/«¹<²Y«–¬5¹&¸D¶Kº ²Y¿Á¯¬­±™¬±™³²À±Ž¯²Y¶&¸/«µ(¸Ž³Q¶&¾Ž³¸/½K°¯³"±™¬­K²g³¬­±Ž« ¯®«K¾/¿¼²¡µ(¸Ž³Q¶¯gÄ ø‰±™À¿Á²j 0 Õ%¿®¼¾/«¹&²Y«@¬ÊÑ@½±Ž¿®¼¬Ø×Ô®Á«Ü¿Á±Ž¯[¬®Á¬[²g³"±™¬®¼¸/«Ü¸ŽÂWª‘ÈÌÉ>º:=<¸ŽÂWÀ¸Ž¬­Ô¬[³"±Ž«¯¿±™¬®¼¸/«>¶®¼³"²Y·T¬®¼¸/«¯gÄ ô^õzV øõ øõzV ô^õ F¸Ž³"°½¯ °³²Y· ³²Y· Õ;,$ °³²Y· ³²Y· Õ;,$ ÈGÉ>ʇË)Ì;ÍË(ÎeÏ  DÄþÝ  DĀ D° ^Äc °@m@Ā Û1^Ā *),.-./J,Ê10 //è‰™Ï ï °^Ā ^ۙÄþÝ ÛDĀ °^Äc ^° Û DÄc *),.-./J,Ê10 //èÐÝ ™Ï ï °=KĀ  DÄÁÛ ÛYÝDĀ °=KÄþÝ  Dğ= ÛYÝDğ= *),.-./J,Ê10 //è‰ ™Ï ï ° DĀ =KÄþÝ Û1^Ä m ° DĀ =Kğ= Û1^Ā *),.-./J,Ê10 //è‘Û ™Ï ï ° °DÄÁÛ =KĀ Dğ= ° °DĀ  DÄc DÄc ø5±™À¿¼²j 0œ,Ò²Y·T¬%¸ŽÂ5·T¸/¹}À®Á«±™¬®Á¸/«Ü¸ŽÂ‰ª‘ÈÊÉܺ:=qÙ ®Á¬[²g³À®±Ž¿Á®¼¾/«¹&²Y«–¬¯ÌÂd³¸/¹xÀ¸Ž¬­Ü¬[³Q±Ž«¯¿Á±™¬®¼¸/«Ü¶®¼³²Y·T¬®¼¸/«¯YÄ ª‘«@¬[²g³"¯[²Y·T¬®¼¸/« !%«®¼¸/« ²gÇ«²Y¶ F¸Ž³°½¯ °³²Y· ³"²Y· Õ;,$ °³²Y· ³²Y· Õ;,$ °³"²Y· ³²Y· Ռ, ÈÆÉ(ʇË8ÌB̈́Ë8Î½Ï @m@Ā ° Dğ= °DÄc °@m@Ā  °DÄc °DĀ  DĀ  Dğ= Dğ= *),./J,„Ê10://è‰™Ï ï  DÄ m ° DĀ DÄc m™ÝDĀ  DĀ Ý^ÄþÝ ° DĀ ŽÝDĀ ÛŽÛ™Ä m *),./J,„Ê10://èÐÝ ™Ï ï  DÄ m ° DÄc D° m m@Ā @m@Ā Û DĀ ° °DĀ =KĀ Dğ= *),./J,„Ê10://è‰ ™Ï ï  DĀ ^Ā D° °^Ä m @m@° Û D° ^ÄÁÛ  DÄÁÛ °DÄc *),.-./,„Ê10 /–è‘Û ™Ï ï  D° ^ۙĀ DĀ ° DÄc  °DÄc ÛYÝDÄÁÛ ^ğ=  DĀ m@Ā ¥ =k{0 4|'Ø8ãON‰HG8n07¨ ø­®¯jµ¸Ž³Ïñ­±Ž¯§À²g²Y«*°±™³¬®±Ž¿Á¿¼×ö¯½K°°¸Ž³¬[²Y¶*±Ž¯ °±™³¬¸ŽÂ˜¬­K²¡Ù²g³Àn¹&¸ŽÀ®Á¿°³¸ %[²Y·T¬èù·T¸/«@¬[³"±Ž·T¬Ã«D½¹º À²g³´KÛ ªÞÙ mKÛuøx= ï À@× ¬­K²D» ²g³"¹Ú±Ž«¿K²Y¶^º ²g³"±Ž¿Éj®«®Á¯[¬[³×}¸ŽÂ˜, ¶½·g±™¬®¼¸/«´^ôK·g®¼²Y«·T²Ž´C²Y¯[²Y±™³"·Q­ ±Ž«¶Öø7²Y·Q­«K¸/¿¼¸Ž¾Ž×ֱޫ¶÷±Ž¯y°±™³¬y¸ŽÂ¬­K²I, ½ø‰³"±Ž«¯ °³¸ %[²Y·T¬lÀD×§¬­K²<ÀD×§¬­K²", ½³¸Ž°²Y±Ž«nF¸/¹<¹y½«®¼¬Þ× èK,ô$ð$ª‘øð°³¸ %[²Y·T¬Ã«D½¹}À²g³X /Ý ° ï Ä }§8:‘8-/80(=^86 ~€^ƒ‚k„†…ˆ‡Š‰`…ˆ„†‹ŒrŽ^L…„‘…ˆ’ Œr“””•a–‹‘—J–’˜’Œr–F‡™ š=d›œž…™a…ˆŸU–F‡k‡f  ¡‘¡F¡ ¢€—J–F’ž£–¤œ˜¥F‡Z¥¦#§ ¥‘„†™ 
–F’žœ˜‹F‡kŸ…‡‘¤©¨ªa¨h¤†…ˆŸU¨d«‡­¬:®¯°†±²±²³J´¶µŠ·J¸)¯º¹U»½¼k±d¾C±²°À¿ ¯µ`³”ÁÀµ»±À®µCÂJ» ´Ã¯Jµ`ÂÄ1ŀ¯Jµ¹À±ˆ®±ˆµ`°²±©¯µ5ÆPÂJµ ·FLj· ±$ȱÀ¿ ¸ˆ¯JÇa®†°†±À¸bÂJµC³?É:ÊJÂJÄvÇ»´Ã¯µq˽ÆÈ$ÉÌÅ`ÍJŒ.ΖF‹F…¨LÏ ‘ÐFÐÑ Ï ÒkϑŒa:¤‚k…‡¨ˆŒ`Ó̄……ÔÀ…‘ŒkÕ–ªÖJš‘£k‡k…‘ ×+؁ƒ’žÙºÚ‡–œ˜Û–F‡fŒ©š=ÝÜ £k„†œ˜‡fŒ©Þؚ‘–F‚k„Œ©ßÌߨ‡œž‹‘‚‘¤Œ š= ~P–JàC…ˆ„¤hªFŒ « á+ L…’˜–FŸ…™âŒ ã: š= ÚÌÔ²‚fŒ á©åä.£k„†™kªFŒ æ©灩啊Ÿœè¤†‚fŒ –F‡™ á+T×.–„†¥J§ƒ¨ºŠª‘ ÏéFéFé •Ф†–J¤†œ˜¨º¤œêԈ–F’ëŸU–FÔ²‚kœ˜‡k… ¤†„†–F‡¨º’ê–J¤†œž¥‘‡fŒí쇖F’…Î=¥‘„º¤Œïš‘“ð §:¥F„†a¨º‚k¥‘Îf ñaòŠò ó:ôõŠõöŠö ö÷²ø‘ùkúó÷hûFñ‘ü÷ºýFþ‘üõö`úÿŠÿ õó û ýøò`ú‘õ  òõ aù kýFó  Šòõ  ò Šùkýó‘ò:÷ óCú  ~€ ¢ƒ–£kŸ Ïé k$ƒ‡?«‡k… £–F’žœž¤hªL–‡™?$¨†¨º¥aÔÀœê–J¤†…™ Õ–aœžŸœ˜Û–J¤†œž¥‘‡Õ›…Ô²‚k‡kœ £k…”œ˜‡?• ¤†–¤œê¨h¤†œ˜Ô–’¢.¨º¤œ˜Ÿ–Ù ¤†œž¥‘‡¦½¥F„ ä.„¥‘‰–‰œž’˜œ˜¨º¤œêÔ ãk£‡Ô¤†œž¥‘‡¨.¥F¦Õ–„†F¥J—+䀄†¥٠Ԉ…¨†¨º…¨ˆ€ÁµC±ÀÇÂÄv´¶»´Ã±À¸Œ ˜ÏÑk ä:„¥J§ ‡fŒf•Cf©fá$…’ž’ê–5䀜˜…À¤„²–kŒ+fšâá…ˆ’˜’˜–5䀜˜…À¤†„†–Œ Þš=ÓÌ¥F’ê™k¨Ÿœè¤†‚fŒ€š“$– hœ˜ÔFŒ"!ÝP~.L…„†Ôˆ…ˆ„Œ–F‡™ •âL¥F‚=–‡ ¤hªF Ïé‘é ‘–k# £k¤©™aœ˜ÔÀ¤œ˜¥F‡–F„œ˜…¨Ý–F„…U™k–¤†– ¤†¥Š¥ «‡%$”Ç&µ ÆPÂJµ ·FLj· ±('±²°²¼Šµ`¯Я†·)JŒ+Ζ‹‘…¨  F¡‘ Ña ¡ Ða äã:*:„†¥J§ ‡fŒ•CJ”á$…’ž’ê–Ì䀜˜…À¤„²–kŒ*+š=á$…’ž’ê–$ä.œž…ˆ¤„²–kŒ –F‡™+!Ý`~€=Õ…ˆ„²ÔÀ…ˆ„$ÏéFé‰f:›‚k…”ŸU–J¤†‚k…ˆŸU–J¤†œ˜Ô¨ ¥¦ ¨º¤†–¤œê¨h¤†œ˜Ô–’kŸU–‘Ô²‚kœž‡…¤„²–‡¨’˜–¤œ˜¥F‡,ä1–„²–Ÿ…À¤†…ˆ„…¨º¤œžÙ ŸU–¤œ˜¥F‡f+Å1¯&.-CÇa»!»´Ã¯µ`ÂÄâÆ1´¶µ ·FÇa´ê¸²» ´Ã°À¸²ŒPÏé/  01  FÒ JÑ ÏFϑ !ÝÌߨ‡k…¨…ˆ„?–‡™ “©情ªF Ïé‘ékÏF ãk¥‘„Ÿœ˜‡k‹32?¥‘„†™ Ü ’ê–F¨†¨…¨ƒ‰ŠªL• ¤†–¤œê¨h¤†œ˜Ô–’€Ü ’ž£=¨h¤†…ˆ„†œž‡k‹d¦½¥F„Ì•Š¤†–J¤†œ˜¨º¤œêԈ–F’ ~P–‡k‹‘£–‹‘…5Õ¥Š™k…ˆ’˜’žœ˜‡k‹j«‡54687ÇÂJµ» ´¶»ÂJ» ´¶Ê±5Æ´¶µ¿ ·FÇa´ê¸»´Ã°À¸Uŀ¯Jµ¹À±ˆ®±ˆµ`°²±ˆŒ`•Š…Îa¤…Ÿ©‰`…ˆ„ «”á+©L…’˜–FŸ^…™â ÏéFék b–‡Š£–’U–F‡k‡k¥¤²–J¤†œž¥‘‡ ¥¦ ¤†„†–F‡¨’˜–¤œ˜¥F‡–F’â… £kœ˜—–F’ž…‡ÔÀ… ›‚k…9:’˜œž‡F…ˆ„$Îk„¥ h…ÔÀ¤ ›P…Ô²‚‡kœ˜Ô–’,! 
…Î=¥‘„º¤ƒéJÙ¡:ŠŒŠ«;!ƒÜ•C ã:.š= ÚÌÔ²‚ –F‡™ “”.æ$…ˆªFj F¡F¡‘¡k Ԉ¥FŸÎ–F„œê¨º¥‘‡Ž¥¦ –F’žœ˜‹F‡Ÿ^…‡ ¤ Ÿ¥a™a…ˆ’ê¨ ¦½¥F„ ¨h¤²–J¤†œ˜¨º¤œêԈ–F’=ŸU–FÔ²‚œž‡k…$¤„²–‡¨ºÙ ’ê–J¤†œž¥‘‡f«‡5¬:®¯° 6P¯º¹»½¼k±<4=»½¼Áµ=»>6€Å€¯Jµ¹16¯µÞÅ1¯&+¿ -CÇa»!»´Ã¯µ`ÂÄÆ´¶µŠ·Ça´ê¸» ´Ã°¸Œ€•a–‘–„†‰k„? £=Ô²F…ˆ‡ŒÓ̅ˆ„†Ÿ–F‡ŠªFŒ $£k‹F£=¨h¤ •C1¥F‹‘…ˆ’ Œ“”C情ªFŒ=–F‡™ÕÜ=›œ˜’ž’˜ŸU–‡k‡ÏéFéFÒ.“$b Ù ‰=–F¨…™§:¥F„²™U–’˜œž‹‘‡kŸ…ˆ‡ ¤ œ˜‡r¨º¤†–J¤†œ˜¨º¤œêԈ–F’k¤†„†–F‡¨º’ê–J¤†œž¥‘‡f «‡ ÅA@.ÆfÁCBEDGF H:IJK'C¼k±#4:I»½¼dÁµ=»>6 ŀ¯Jµ¹161¯JµÅ1¯&+¿ -CÇa»!»´Ã¯µ`ÂĀÆ1´¶µ ·FÇa´ê¸»´Ã°À¸Œ.ΖF‹F…¨8 ‘ÒÑLϑŒ1Ü ¥‘Î=…‡aÙ ‚=–‹F…‡fŒkƒ£‹F£¨º¤
Multi-Component TAG and Notions of Formal Power

William Schuler, David Chiang
Computer and Information Science
University of Pennsylvania
Philadelphia, PA
{schuler,dchiang}@linc.cis.upenn.edu

Mark Dras
Inst. for Research in Cognitive Science
University of Pennsylvania
Suite 400A, 3401 Walnut Street
Philadelphia, PA
[email protected]

Abstract

This paper presents a restricted version of Set-Local Multi-Component TAGs (Weir, 1988) which retains the strong generative capacity of Tree-Local Multi-Component TAG (i.e. produces the same derived structures) but has a greater derivational generative capacity (i.e. can derive those structures in more ways). This formalism is then applied as a framework for integrating dependency and constituency based linguistic representations.

1 Introduction

An aim of one strand of research in generative grammar is to find a formalism that has a restricted descriptive capacity sufficient to describe natural language, but no more powerful than necessary, so that the reasons some constructions are not legal in any natural language are explained by the formalism rather than by stipulations in the linguistic theory. Several mildly context-sensitive grammar formalisms, all characterizing the same string languages, are currently possible candidates for adequately describing natural language; however, they differ in their capacities to assign appropriate linguistic structural descriptions to these string languages. The work in this paper is in the vein of other work (Joshi, 2000) in extracting as much structural descriptive power as possible given a fixed ability to describe strings, and uses this to model dependency as well as constituency correctly.

One way to characterize a formalism's descriptive power is by the set of string languages it can generate, called its weak generative capacity.
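For instance (an illustrative sketch of my own, not from the paper), the language aⁿbⁿcⁿdⁿ discussed just below is trivial to recognize as a string set even though no CFG generates it; weak generative capacity concerns which formalisms can generate such languages, not how hard membership is to check.

```python
# Sketch (mine, not the paper's): a membership test for a^n b^n c^n d^n,
# a string language generable by TAG but by no CFG.
import re

def in_anbncndn(s):
    # Match a block of a's, then b's, then c's, then d's,
    # and require all four blocks to have the same length.
    m = re.fullmatch(r"(a*)(b*)(c*)(d*)", s)
    return bool(m) and len({len(g) for g in m.groups()}) == 1

assert in_anbncndn("aabbccdd")
assert not in_anbncndn("aabbcd")
```

The point of the example is only that generability by a grammar class and ease of recognition are independent questions.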
For example, Tree Adjoining Grammars (TAGs) (Joshi et al., 1975) can generate the language aⁿbⁿcⁿdⁿ and Context-Free Grammars (CFGs) cannot (Joshi, 1985).

Figure 1: CFG-generable tree set for aⁿbⁿ.

Figure 2: TAG-generable tree set for aⁿbⁿ.

However, weak generative capacity ignores the capacity of a grammar formalism to generate derived trees. This is known as its strong generative capacity. For example, CFGs and TAGs can both generate the language aⁿbⁿ, but CFGs can only associate the a's and b's by making them siblings in the derived tree, as shown in Figure 1, whereas a TAG can generate the infinite set of trees for the language aⁿbⁿ that have a's and b's as siblings, as well as the infinite set of trees where the a's dominate the b's in each tree, shown in Figure 2 (Joshi, 1985); thus TAGs have more strong generative capacity than CFGs.

In addition to the tree sets and string languages a formalism can generate, there may also be linguistic reasons to care about how these structures are derived. For this reason, multi-component TAGs (MCTAGs) (Weir, 1988) have been adopted to model some linguistic phenomena. In multi-component TAG, elementary trees are grouped into tree sets, and at each step of the derivation all the trees of a set adjoin simultaneously. In tree-local MCTAG (TL-MCTAG) all the trees of a set are required to adjoin into the same elementary tree; in set-local MCTAG (SL-MCTAG) all the trees of a set are required to adjoin into the same elementary tree set. TL-MCTAGs can generate the same string languages and derived tree sets as ordinary TAGs, so they have the same weak and strong generative capacities, but TL-MCTAGs can derive these same strings and trees in more ways than TAGs can.
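The weak/strong contrast of Figures 1 and 2 can be made concrete with a small sketch (my own illustration, not from the paper): both tree families below yield aⁿbⁿ, but in one each matching a and b are siblings, while in the other the node introducing each a properly dominates the node introducing its matching b.

```python
# Illustrative sketch, not from the paper: two tree shapes with the same
# yield a^n b^n.  Trees are nested tuples (label, child, ...); the leaf
# "e" stands for the empty string.

def cfg_tree(n):
    """CFG-style tree: each matching a and b are siblings under one S."""
    if n == 0:
        return ("S", "e")
    return ("S", "a", cfg_tree(n - 1), "b")

def tag_tree(n):
    """TAG-style tree: the node introducing each a dominates its b."""
    tree = ("S", "e")
    for _ in range(n):
        tree = ("S", "a", ("S", tree, "b"))
    return tree

def yield_of(t):
    """Concatenate the leaves, treating 'e' as the empty string."""
    if isinstance(t, str):
        return "" if t == "e" else t
    return "".join(yield_of(c) for c in t[1:])

# Different derived trees, identical string language.
assert yield_of(cfg_tree(3)) == yield_of(tag_tree(3)) == "aaabbb"
```

The two constructions agree on every string yet disagree on every tree, which is exactly the gap that strong generative capacity measures.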
One motivation for TL-MCTAG as a linguistic formalism (Frank, 1992) is that it can generate a functional head (such as does) in the same derivational step as the lexical head with which it is associated (see Figure 3) without violating any assumptions about the derived phrase structure tree, something TAGs cannot do in every case.

Figure 3: TL-MCTAG-generable derivation.

This notion of the derivations of a grammar formalism as they relate to the structures they derive has been called the derivational generative capacity (Becker et al., 1992). Somewhat more formally (for a precise definition, see Becker et al. (1992)): we annotate each element of a derived structure with a code indicating which step of the derivation produced that element. This code is simply the address of the corresponding node in the derivation tree.[1] Then a formalism's derivational generative capacity is the sets of derived structures, thus annotated, that it can generate.

[1] In Becker et al. (1992) the derived structures were always strings, and the codes were not addresses but unordered identifiers. We trust that our definition is in the spirit of theirs.

The derivational generative capacity of a formalism also describes what parts of a derived structure combine with each other. Thus if we consider each derivation step to correspond to a semantic dependency, then derivational generative capacity describes what other elements a semantic element may depend on. That is, if we interpret the derivation trees of TAG as dependency structures and the derived trees as phrase structures, then the derivational generative capacity of TAG limits the possible dependency structures that can be assigned to a given phrase structure.
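The derivation-tree addresses used as annotation codes above can be sketched briefly (the nested-tuple encoding and function name are my own, not the paper's): the root gets the empty Gorn address, and the ith child of a node at address τ gets τ extended by i.

```python
# Minimal sketch (my own encoding, not the paper's): derivation trees as
# nested tuples (label, child, ...), each node paired with its Gorn address.

def gorn_addresses(tree, addr=()):
    """Yield (address, label) pairs; root is (), i-th child appends i."""
    label = tree[0] if isinstance(tree, tuple) else tree
    yield addr, label
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from gorn_addresses(child, addr + (i,))

# A toy derivation tree: 'seem' adjoins into 'sleep', 'thinks' into 'seem'.
deriv = ("sleep", ("seem", ("thinks",)))
print(dict(gorn_addresses(deriv)))
# {(): 'sleep', (1,): 'seem', (1, 1): 'thinks'}
```

Annotating each element of a derived structure with one of these addresses is what makes the derivational generative capacity of a formalism observable.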
1.1 Dependency and Constituency

We have seen that TL-MCTAGs can generate some derivations for "Does John seem to sleep" that TAG cannot, but even TL-MCTAG cannot generate the string "Does John seem likely to sleep" with a derived tree that matches some linguistic notion of correct constituency and a derivation that matches some notion of correct dependency. This is because the components for 'does' and 'seem' would have to adjoin into different components of the elementary tree set for 'likely' (see Figure 4), which would require a set-local multi-component TAG instead of a tree-local one.

Figure 4: SL-MCTAG-generable derivation.

Unfortunately, unrestricted set-local multi-component TAGs not only have more derivational generative capacity than TAGs, but they also have more weak generative capacity: SL-MCTAGs can generate the quadruple copy language wwww, for example, which does not correspond to any known linguistic phenomenon. Other formalisms aiming to model dependency correctly similarly expand weak generative capacity, notably D-tree Substitution Grammar (Rambow et al., 1995), and consequently end up with much greater parsing complexity.

The work in this paper follows another
Restricting set-lo cal MCT A G The w a y w e prop ose to deal with m ulticomp onen t adjunction is rst to limit the n um b er of comp onen ts to t w o, and then, roughly sp eaking, to treat t w o-comp onen t adjunction as one-comp onen t adjunction b y temp orarily r emoving the material b et w een the t w o adjunction sites. The reasons b ehind this sc heme will b e explained in subsequen t sections, but w e men tion it no w b ecause it motiv ates the somewhat complicated restrictions on p ossible adjunction sites:  One adjunction site m ust dominate the other. If the t w o sites are  h and  l , call the set of no des dominated b y one no de but not strictly dominated b y the other the site-se gment h h ;  l i.  Remo ving a site-segmen t m ust not depriv e a tree of its fo ot no de. That is, no site-segmen t h h ;  l i ma y con tain a fo ot no de unless  l is itself the fo ot no de.  If t w o tree sets adjoin in to the same tree, the t w o site-segmen ts m ust b e sim ultaneously remo v able. That is, the t w o sitesegmen ts m ust b e disjoin t, or one m ust con tain the other. Because of the rst restriction, w e depict tree sets with the comp onen ts connected b y a dominance link (dotted line), in the manner of (Bec k er et al.,  ). As written, the ab o v e rules only allo w tree-lo cal adjunction; w e can generalize them to allo w set-lo cal adjunction b y treating this dominance link lik e an ordinary arc. But this w ould increase the w eak generativ e capacit y of the system. F or presen t purp oses it is sucien t just to allo w one t yp e of set-lo cal adjunction: adjoin the upp er tree to the upp er fo ot, and the lo w er tree to the lo w er ro ot (see Figure ). This do es not increase the w eak generativ e capacit y , as will b e sho wn in Section .. Observ e that the set-lo cal T A G giv en in Figure  ob eys the ab o v e restrictions. . L T A G F or the follo wing section, it is useful to think of T A G in a manner other than the usual. 
Instead of it being a tree-rewriting system whose derivation history is recorded in a derivation tree, it can be thought of as a set of trees (the 'derivation' trees) with a yield function (here, reading off the node labels of derivation trees, and composing corresponding elementary trees by adjunction or substitution as appropriate) applied to get the TAG trees. Weir (1988) observed that several TAGs could be daisy-chained into a multilevel TAG whose yield function is the composition of the individual yield functions. More precisely: a 2L TAG is a pair of TAGs ⟨G, G′⟩ = ⟨⟨Σ, NT, I, A, S⟩, ⟨I ∪ A, I ∪ A, I′, A′, S′⟩⟩. We call G the object-level grammar, and G′ the meta-level grammar. The object-level grammar is a standard TAG: Σ and NT are its terminal and nonterminal alphabets, I and A are its initial and auxiliary trees, and S ⊆ I contains the trees which derivations may start with. The meta-level grammar G′ is defined so that it derives trees that look like derivation trees of G:

- Nodes are labeled with (the names of) elementary trees of G.

- Foot nodes have no labels.

- Arcs are labeled with Gorn addresses.[2]

[2] The Gorn address of a root node is ε; if a node has Gorn address τ, then its ith child has Gorn address τ·i.

Figure 6: Adjoining β into α by removing δ.
Reducing restricted R-MCT A G to RF-L T A G Consider the case of a m ulticomp onen t tree set f  ;  g adjoining in to an initial tree (Figure ). Recall that w e de ned a sitesegmen t of a pair of adjunction sites to b e all the no des whic h are dominated b y the upp er site but not the lo w er site. Imagine that the site-segmen t is excised from , and that  and  are fused in to a single elemen tary tree. No w w e can sim ulate the m ulti-comp onen t adjunction b y ordinary adjunction: adjoin the fused  and  in to what is left of ; then replace b y adjoining it b et w een  and  . The replacemen t of can b e p ostp oned inde nitely: some other (fused) tree set f  0 ;  0 g can adjoin b et w een  and  , and so on, and then adjoins b et w een the last pair of trees. This will pro duce the same result as a series of set-lo cal adjunctions. More formally: . F use all the elemen tary tree sets of the grammar b y iden tifying the upp er fo ot   i. with the lo w er ro ot. Designate this fused no de the meta-fo ot. . F or eac h tree, and for ev ery p ossible combination of site-segmen ts, excise all the site-segmen ts and add all the trees th us pro duced (the excised auxiliary trees and the remainders) to the grammar. No w that our grammar has b een smashed to pieces, w e m ust mak e sure that the righ t pieces go bac k in the righ t places. W e could do this using features, but the resulting grammar w ould only b e strongly equiv alen t, not deriv ationally equiv alen t, to the original. Therefore w e use a meta-lev el grammar instead: . F or eac h initial tree, and for ev ery p ossible com bination of site-segmen ts, construct the deriv ation tree that will reassem ble the pieces created in step () ab o v e and add it to the meta-lev el grammar. . 
F or eac h auxiliary tree, and for ev ery p ossible com bination of site-segmen ts, construct a deriv ation tree as ab o v e, and for the no de whic h corresp onds to the piece con taining the meta-fo ot, add a c hild, lab el its arc with the meta-fo ot's address (within the piece), and mark it a fo ot no de. Add the resulting (meta-lev el) auxiliary tree to the meta-lev el grammar. Observ e that set-lo cal adjunction corresp onds to meta-lev el adjunction along the (meta-lev el) spine. Recall that w e restricted set-lo cal adjunction so that a tree set can only adjoin at the fo ot of the upp er tree and the ro ot of the lo w er tree. Since this pair of no des corresp onds to the meta-fo ot, w e can restate our restriction in terms of the conv erted grammar: no meta-lev el adjunction is allo w ed along the spine of a (meta-lev el) auxiliary tree except at the (meta-lev el) fo ot. Then all meta-lev el adjunction is regular adjunction in the sense of (Rogers,  ). Therefore this con v erted L T A G pro duces deriv ation tree sets whic h are recognizable, and therefore our formalism is strongly equivalen t to T A G. Note that this restriction is m uc h stronger than Rogers' regular form restriction. This w as done for t w o reasons. First, the de nition of our restriction w ould ha v e b een more complicated otherwise; second, this restriction o v ercomes some computational diculties with RF-T A G whic h w e discuss b elo w.  Linguistic Applications In cases where T A G mo dels dep endencies correctly , the use of R-MCT A G is straigh tforw ard: when an auxiliary tree adjoins at a site pair whic h is just a single no de, it lo oks just lik e con v en tional adjunction. Ho w ev er, in problematic cases w e can use the extra expressiv e p o w er of R-MCT A G to mo del dep endencies correctly . Tw o suc h cases are discussed b elo w. . Bridge and Raising V erbs S NP John VP V thinks S . . . S S C that S . . . 
VP V seems VP S NP Mary VP V to sleep Figure : T rees for () Consider the case of sen tences whic h contain b oth bridge and raising v erbs, noted b y Ram b o w et al. (  ). In most T A G-based analyses, bridge v erbs adjoin at S (or C 0 ), and raising v erbs adjoin at VP (or I 0 ). Th us the deriv ation for a sen tence lik e () John thinks that Mary seems to sleep. will ha v e the trees for thinks and se ems sim ultaneously adjoining in to the tree for like, whic h, when in terpreted, giv es an incorrect dep endency structure. But under the presen t view w e can analyze sen tences lik e () with deriv ations mirroring dep endencies. The desired trees for () are sho wn in Figure . Since the tree for that se ems can meta-adjoin around the sub ject, the tree for thinks correctly adjoins in to the tree for se ems rather than e at. Also, although the ab o v e analysis pro duces the correct dep endency links, the directions are in v erted in some cases. This is a disadv an tage compared to, for example, DSG; but since the directions are consisten tly in v erted, for applications lik e translation or statistical mo deling, the particular c hoice of direction is usually immaterial. . More on Raising V erbs T ree-lo cal MCT A G is able to deriv e (a), but unable to deriv e (b) except b y adjoining the auxiliary tree for to b e likely at the fo ot of the auxiliary tree for se em (F rank et al.,  ). () a. Do es John seem to sleep? b. Do es John seem to b e lik ely to sleep? The deriv ation structure of this analysis do es not matc h the dep endencies, ho w ev er|se em adjoins in to to sle ep. DSG can deriv e this sen tence with a deriv ation matc hing the dep endencies, but it loses some of the adv an tage of T A G in that, for example, cases of sup er-raising (where the v erb is raised out of t w o clauses) m ust b e explicitly ruled out b y subsertion-insertion constrain ts. F rank et al. 
( ) and Kulic k (000) giv e analyses of raising whic h assign the desired deriv ation structures without running in to this problem. It turns out that the analysis of raising from the previous section, designed for a translation problem, has b oth of these prop erties as w ell. The grammar is sho wn bac k in Figure .  A P arser Figure  sho ws a CKY-st yle parser for our restriction of MCT A G as a system of inference rules. It is limited to grammars whose trees are at most binary-branc hing. The parser consists of rules o v er items of one of the follo wing forms, where w     w n is the input;  ,  h , and  l sp ecify no des of the grammar; i, j, k, and l are in tegers b et w een 0 and n inclusiv e; and c o de is either + or :  [ ; c o de ; i; ; ; l ; ; ] and [ ; c o de ; i; j; k ; l ; ; ] function as in a CKY-st yle parser for standard T A G (Vija y-Shank er,  ): the subtree ro oted b y   T deriv es a tree whose fringe is w i    w l if T is initial, or w i    w j F w k    w l if T is the lo w er auxiliary tree of a set and F is the lab el of its fo ot no de. In all four item forms, c o de = + i adjunction has tak en place at  .  [ ; c o de ; i; j; k ; l ; ;  l ] sp eci es that the segmen t h ;  l i deriv es a tree whose fringe is w i    w j Lw k    w l , where L is the lab el of  l . In tuitiv ely , it means that a p oten tial site-segmen t has b een recognized.  [ ; c o de ; i; j; k ; l ;  h ;  l ] sp eci es, if  b elongs to the upp er tree of a set, that the subtree ro oted b y  , the segmen t h h ;  l i, and the lo w er tree concatenated together deriv e a tree whose fringe is w i    w j F w k    w l , where F is the lab el of the lo w er fo ot no de. In tuitiv ely , it means that a tree set has b een partially recognized, with a site-segmen t inserted b et w een the t w o comp onen ts. The rules whic h require di er from a T A G parser and hence explanation are Pseudop o d, Push, P op, and P op-push. 
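The index bookkeeping of these items can be illustrated with a small sketch. The following Python fragment is our own (not part of the paper; the node names are placeholders): it encodes items as tuples and shows the span arithmetic of the Pop-push rule, which places a completed elementary tree set between the two trees of another set:

```python
# Illustrative sketch of chart items [eta; code; i,j,k,l; eta_h, eta_l]
# and the span arithmetic of the Pop-push rule:
#   [eta1; +; j,p,q,k; -,-]  +  [eta_r; +; i,j,k,l; eta_h,eta_l]
#   ==>  [eta; +; i,p,q,l; eta_h,eta_l]
from collections import namedtuple

Item = namedtuple("Item", "node code i j k l eta_h eta_l")

def pop_push(inner, outer):
    """Combine a completed inner item with an outer item whose gap
    (outer.j .. outer.k) exactly matches the inner item's outer span."""
    assert (inner.i, inner.l) == (outer.j, outer.k), "spans do not nest"
    # "eta" is a placeholder for the node concluded by the rule
    return Item("eta", "+", outer.i, inner.j, inner.k, outer.l,
                outer.eta_h, outer.eta_l)

inner = Item("eta1", "+", 2, 3, 4, 5, None, None)       # spans 2..5, gap 3..4
outer = Item("eta_r", "+", 1, 2, 5, 6, "eta_h", "eta_l")  # spans 1..6, gap 2..5
combined = pop_push(inner, outer)
print(combined.i, combined.j, combined.k, combined.l)  # 1 3 4 6
```

The six indices visible here (two spans of four indices sharing two) are what give the rule its O(n^6) factor.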
Pseudopod applies to any potential lower adjunction site and is so called because the parser essentially views every potential site-segment as an auxiliary tree (see the reduction above), and the Pseudopod axiom recognizes the feet of these false auxiliary trees. The Push rule performs the adjunction of one of these false auxiliary trees; that is, it places a site-segment between the two trees of an elementary tree set. It is so called because the site-segment is saved in a "stack" so that the rest of its elementary tree can be recognized later. Of course, in our case the "stack" has at most one element. The Pop rule does the reverse: every completed elementary tree set must contain a site-segment, and the Pop rule places it back where the site-segment came from, emptying the "stack." The Pop-push rule performs set-local adjunction: a completed elementary tree set is placed between the two trees of yet another elementary tree set, and the "stack" is unchanged. Pop-push is computationally the most expensive rule; since it involves six indices and three different elementary trees, its running time is O(n^6 |G|^3).

It was noted in (Chiang et al., 2000) that for synchronous RF-2LTAG, parse forests could not be transferred in time O(n^6). This fact turns out to be connected to several properties of RF-TAG (Rogers).[1] The CKY-style parser for regular-form TAG described by Rogers essentially keeps track of adjunctions using stacks, and the regular-form constraint ensures that the stack depth is bounded. The only kinds of adjunction that can occur to arbitrary depth are root and foot adjunction, which are treated similarly to substitution and do not affect the stacks. The reader will note that our parser works in exactly the same way.

A problem arises if we allow both root and foot adjunction, however. It is well known that allowing both types of adjunction creates derivational ambiguity (Vijay-Shanker): adjoining beta2 at the foot of beta1 produces the same derived tree that adjoining beta1 at the root of beta2 would. The problem is not the ambiguity per se, but that the regular-form TAG parser, unlike a standard TAG parser, does not always distinguish these multiple derivations, because root and foot adjunction are both performed by the same rule (analogous to our Pop-push). Thus for a given application of this rule, it is not possible to say which tree is adjoining into which without examining the rest of the derivation. But this knowledge is necessary to perform certain tasks online: for example, enforcing adjoining constraints, computing probabilities (and pruning based on them), or performing synchronous mappings. Therefore we arbitrarily forbid one of the two possibilities.[2] The parser given in the figure below already takes this into account.

[1] Thanks to Anoop Sarkar for pointing out the first such connection.
[2] Against tradition, we forbid root adjunction, because adjunction at the foot ensures that a bottom-up traversal of the derived tree will encounter elementary trees in the same order as they appear in a bottom-up traversal of the derivation tree, simplifying the calculation of derivations.

[Figure: the parser as a system of inference rules over the item forms given above: the Goal item, the Leaf, Foot, and Pseudopod axioms, the Unary and two Binary rules, a No-adjunction rule, and the Push, Pop, and Pop-push rules.]

Discussion

Our version of MCTAG follows other work in incorporating dependency into a constituency-based approach to modeling natural language. One such early integration involved work by Gaifman, which showed that projective dependency grammars could be represented by CFGs. However, it is known that there are common phenomena which require non-projective dependency grammars, so looking only at projective dependency grammars is inadequate. Following the observation of TAG derivations' similarity to dependency relations, other formalisms have also looked at relating dependency and constituency approaches to grammar formalisms. A more recent instance is D-Tree Substitution Grammars (DSG) (Rambow et al.), where the derivations are also interpreted as dependency relations. Thought of in the terms of this paper, there is a clear parallel with R-MCTAG, with a local set ultimately representing dependencies having some yield function applied to it; the idea of non-immediate dominance also appears in both formalisms. The difference between the two is in the kinds of languages that they are able to describe: DSG is both less and more restrictive than R-MCTAG. DSG can generate the language count-k for arbitrary k (that is, {a1^n a2^n ... ak^n}), which makes it extremely powerful, whereas R-MCTAG can only generate count-4.
However, DSG cannot generate the copy language (that is, {ww | w in Sigma*}, with Sigma some terminal alphabet), whereas R-MCTAG can; this may be problematic for a formalism modeling natural language, given the key role of the copy language in demonstrating that natural language is not context-free (Shieber). R-MCTAG is thus a more constrained relaxation of the notion of immediate dominance in favor of non-immediate dominance than is the case for DSG.

Another formalism of particular interest here is the Segmented Adjoining Grammar of Kulick (2000). This generalization of TAG is characterized by an extension of the adjoining operation, motivated by evidence in scrambling, clitic climbing, and subject-to-subject raising. Most interestingly, this extension to TAG, proposed on empirical grounds, is defined by a composition operation with constrained non-immediate dominance links that looks quite similar to the formalism described in this paper, which began from formal considerations and was then applied to data. This confluence suggests that the ideas described here concerning combining dependency and constituency might be reaching towards some deeper connection.

Conclusion

From a theoretical perspective, extracting more derivational generative capacity and thereby integrating dependency and constituency into a common framework is an interesting exercise. It also, however, proves to be useful in modeling otherwise problematic constructions, such as subject-auxiliary inversion and bridge and raising verb interleaving. Moreover, the formalism developed from theoretical considerations, presented in this paper, has similar properties to work developed on empirical grounds, suggesting that this is worth further exploration.

References

Tilman Becker, Aravind Joshi, and Owen Rambow. Long distance scrambling and tree adjoining grammars. In Fifth Conference of the European Chapter of the Association for Computational Linguistics (EACL).
Tilman Becker, Owen Rambow, and Michael Niv. The derivational generative power of formal systems, or, Scrambling is beyond LCFRS. Technical report, Institute for Research in Cognitive Science, University of Pennsylvania.
David Chiang, William Schuler, and Mark Dras. 2000. Some remarks on an extension of synchronous TAG. In Proceedings of TAG+, Paris, France.
Mark Dras. A meta-level grammar: redefining synchronous TAG for translation and paraphrase. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).
Robert Frank, Seth Kulick, and K. Vijay-Shanker. C-command and extraction in tree adjoining grammar. In Proceedings of the Sixth Meeting on the Mathematics of Language (MOL).
Robert Frank. Syntactic locality and tree adjoining grammar: grammatical acquisition and processing perspectives. Ph.D. thesis, Computer Science Department, University of Pennsylvania.
Haim Gaifman. Dependency systems and phrase-structure systems. Information and Control.
Gerald Gazdar. Applicability of indexed grammars to natural languages. In Uwe Reyle and Christian Rohrer, editors, Natural Language Parsing and Linguistic Theories. D. Reidel Publishing Company, Dordrecht, Holland.
Aravind Joshi and K. Vijay-Shanker. Compositional semantics with lexicalized tree-adjoining grammar (LTAG): how much underspecification is necessary? In Proceedings of the 2nd International Workshop on Computational Semantics.
Aravind K. Joshi, Leon S. Levy, and M. Takahashi. Tree adjunct grammars. Journal of Computer and System Sciences.
Aravind K. Joshi. How much context sensitivity is necessary for characterizing structural descriptions: tree adjoining grammars. In D. Dowty, L. Karttunen, and A. Zwicky, editors, Natural Language Parsing: Psychological, Computational and Theoretical Perspectives. Cambridge University Press, Cambridge, U.K.
Aravind Joshi. 2000. Relationship between strong and weak generative power of formal systems. In Proceedings of TAG+, Paris, France.
Seth Kulick. 2000. A uniform account of locality constraints for clitic climbing and long scrambling. In Proceedings of the Penn Linguistics Colloquium.
Owen Rambow, David Weir, and K. Vijay-Shanker. D-tree grammars. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).
James Rogers. Capturing CFLs with tree adjoining grammars. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).
Stuart Shieber. Evidence against the context-freeness of natural language. Linguistics and Philosophy.
K. Vijay-Shanker. A study of tree adjoining grammars. Ph.D. thesis, Department of Computer and Information Science, University of Pennsylvania.
David Weir. Characterizing Mildly Context-Sensitive Grammar Formalisms. Ph.D. thesis, Department of Computer and Information Science, University of Pennsylvania.
2000
Statistical parsing with an automatically-extracted tree adjoining grammar

David Chiang
Department of Computer and Information Science
University of Pennsylvania
200 S 33rd St, Philadelphia PA
[email protected]

Abstract

We discuss the advantages of lexicalized tree-adjoining grammar as an alternative to lexicalized PCFG for statistical parsing, describing the induction of a probabilistic LTAG model from the Penn Treebank and evaluating its parsing performance. We find that this induction method is an improvement over the EM-based method of Hwa, and that the induced model yields results comparable to lexicalized PCFG.

1 Introduction

Why use tree-adjoining grammar for statistical parsing? Given that statistical natural language processing is concerned with the probable rather than the possible, it is not because TAG can describe constructions like arbitrarily large Dutch verb clusters. Rather, what makes TAG useful for statistical parsing are the structural descriptions it assigns to bread-and-butter sentences. The approach of Chelba and Jelinek to language modeling is illustrative: even though the probability estimate of w appearing as the kth word can be conditioned on the entire history w1, ..., w(k-1), the quantity of available training data limits the usable context to about two words, but which two? A trigram model chooses w(k-1) and w(k-2) and works quite well; a model which chose two words further back would probably work less well. But Chelba and Jelinek choose the lexical heads of the two previous constituents as determined by a shift-reduce parser, and their model works better than a trigram model. Thus the (virtual) grammar serves to structure the history so that the two most useful words can be chosen, even though the structure of the problem itself is entirely linear.
Similarly, nothing about the parsing problem requires that we construct any structure other than phrase structure. But beginning with Magerman, statistical parsers have used bilexical dependencies with great success. Since these dependencies are not encoded in plain phrase-structure trees, the standard approach has been to let the lexical heads percolate up the tree, so that when one lexical head is immediately dominated by another, it is understood to be dependent on it. Effectively, a dependency structure is made parasitic on the phrase structure so that they can be generated together by a context-free model. However, this solution is not ideal. Aside from cases where context-free derivations are incapable of encoding both constituency and dependency (which are somewhat isolated and not of great interest for statistical parsing), there are common cases where percolation of single heads is not sufficient to encode dependencies correctly, for example, relative clause attachment or raising/auxiliary verbs (see Section 3). More complicated grammar transformations are necessary. A more suitable approach is to employ a grammar formalism which produces structural descriptions that can encode both constituency and dependency. Lexicalized TAG is such a formalism, because it assigns to each sentence not only a parse tree, which is built out of elementary trees and is interpreted as encoding constituency, but a derivation tree, which records how the various elementary trees were combined together and is commonly interpreted as encoding dependency.

[Figure 1: Grammar and derivation for "John should leave tomorrow": elementary trees anchored by John, leave, should, and tomorrow; the derivation tree; and the derived tree.]

The ability of probabilistic LTAG to model bilexical dependencies was noted early on by Resnik. It turns out that there are other pieces of contextual information that need to be explicitly accounted for in a CFG by grammar transformations but come for free in a TAG. We discuss a few such cases in Section 3. In Sections 4 and 5 we describe an experiment to test the parsing accuracy of a probabilistic TAG extracted automatically from the Penn Treebank. We find that the automatically-extracted grammar gives an improvement over the EM-based induction method of Hwa, and that the parser performs comparably to lexicalized PCFG parsers, though certainly with room for improvement. We emphasize that TAG is attractive not because it can do things that CFG cannot, but because it does everything that CFG can, only more cleanly. (This is where the analogy with Chelba and Jelinek breaks down.) Thus certain possibilities which were not apparent in a PCFG framework or prohibitively complicated might become simple to implement in a PTAG framework; we conclude by offering two such possibilities.

2 The formalism

The formalism we use is a variant of lexicalized tree-insertion grammar (LTIG), which is in turn a restriction of LTAG (Schabes and Waters). In this variant there are three kinds of elementary tree: initial, (predicative) auxiliary, and modifier, and three composition operations: substitution, adjunction, and sister-adjunction. Auxiliary trees and adjunction are restricted as in TIG: essentially, no wrapping adjunction or anything equivalent to wrapping adjunction is allowed. Sister-adjunction is not an operation found in standard definitions of TAG, but is borrowed from D-Tree Grammar (Rambow et al.). In sister-adjunction the root of a modifier tree is added as a new daughter to any other node. (Note that as it stands sister-adjunction is completely unconstrained; it will be constrained by the probability model.)
We introduce this operation simply so we can derive the flat structures found in the Penn Treebank. Following Schabes and Shieber, multiple modifier trees can be sister-adjoined at a single site, but only one auxiliary tree may be adjoined at a single node. Figure 1 shows an example grammar and the derivation of the sentence "John should leave tomorrow." The derivation tree encodes this process, with each arc corresponding to a composition operation. Arcs corresponding to substitution and adjunction are labeled with the Gorn address[1] of the substitution or adjunction site. An arc corresponding to the sister-adjunction of a tree between the ith and (i+1)th children of eta (allowing for two imaginary children beyond the leftmost and rightmost children) is labeled <eta, i>. This grammar, as well as the grammar used by the parser, is lexicalized in the sense that every elementary tree has exactly one terminal node, its lexical anchor. Since sister-adjunction can be simulated by ordinary adjunction, this variant is, like TIG (and CFG), weakly context-free and O(n^3)-time parsable. Rather than coin a new acronym for this particular variant, we will simply refer to it as "TAG" and trust that no confusion will arise.

The parameters of a probabilistic TAG (Resnik; Schabes) are:

    sum_alpha P_i(alpha) = 1
    sum_alpha P_s(alpha | eta) = 1
    sum_beta  P_a(beta | eta) + P_a(NONE | eta) = 1

where alpha ranges over initial trees, beta over auxiliary trees, gamma over modifier trees, and eta over nodes. P_i(alpha) is the probability of beginning a derivation with alpha; P_s(alpha | eta) is the probability of substituting alpha at eta; P_a(beta | eta) is the probability of adjoining beta at eta; finally, P_a(NONE | eta) is the probability of nothing adjoining at eta. Carroll and Weir suggest other parameterizations worth exploring as well.

[1] A Gorn address is a list of integers: the root of a tree has address epsilon, and the jth child of the node with address i has address i . j.
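These normalization constraints are easy to sanity-check mechanically. A minimal sketch (the tree and node names below are invented placeholders, not from the paper):

```python
# Check that each parameter table of a probabilistic TAG sums to one:
# one table of initial-tree probabilities, one substitution table per
# substitution node, and one adjunction table (including NONE) per node.

P_init  = {"alpha1": 0.7, "alpha2": 0.3}
P_subst = {"eta1": {"alpha1": 0.4, "alpha2": 0.6}}
P_adj   = {"eta2": {"beta1": 0.25, "NONE": 0.75}}

def normalized(table, tol=1e-9):
    """True iff the probabilities in table sum to 1 (within tolerance)."""
    return abs(sum(table.values()) - 1.0) < tol

assert normalized(P_init)
assert all(normalized(t) for t in P_subst.values())
assert all(normalized(t) for t in P_adj.values())
print("all parameter tables sum to one")
```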
Our variant adds another set of parameters:

    sum_gamma P_sa(gamma | eta, i, f) + P_sa(STOP | eta, i, f) = 1

This is the probability of sister-adjoining gamma between the ith and (i+1)th children of eta (as before, allowing for two imaginary children beyond the leftmost and rightmost children). Since multiple modifier trees can adjoin at the same location, P_sa(gamma) is also conditioned on a flag f which indicates whether gamma is the first modifier tree (i.e., the one closest to the head) to adjoin at that location.

The probability of a derivation can then be expressed as a product of the probabilities of the individual operations of the derivation. Thus the probability of the example derivation of Figure 1 would be the product of P_i(alpha1), P_a(NONE | .), P_s(alpha2 | .), P_a(beta | .), P_sa(gamma | ., i, true), P_sa(STOP | ., i, false), P_sa(STOP | ., 0, true), and so on, where each conditioning node is written alpha(i), the node of alpha with address i.

We want to obtain a maximum-likelihood estimate of these parameters, but cannot estimate them directly from the Treebank, because the sample space of PTAG is the space of TAG derivations, not the derived trees that are found in the Treebank. One approach, taken by Hwa, is to choose some grammar general enough to parse the whole corpus and obtain a maximum-likelihood estimate by EM. Another approach, taken by Magerman and others for lexicalized PCFGs and by Neumann, Xia, and Chen and Vijay-Shanker (2000) for LTAGs, is to use heuristics to reconstruct the derivations, and directly estimate the PTAG parameters from the reconstructed derivations. We take this approach as well. (One could imagine combining the two approaches, using heuristics to extract a grammar but EM to estimate its parameters.)

3 Some properties of probabilistic TAG

In a lexicalized TAG, because each composition brings together two lexical items, every composition probability involves a bilexical dependency.
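Since a derivation's probability is a product of many small operation probabilities, it is natural to score derivations in log space. A hedged sketch (the operations and their probabilities below are made up for illustration):

```python
import math

# Score a derivation as the product of its operation probabilities,
# accumulating in log space to avoid underflow on long derivations.
ops = [
    ("init",        0.01),  # P_i(alpha1)
    ("no-adjoin",   0.9),   # P_a(NONE | node)
    ("subst",       0.05),  # P_s(alpha2 | node)
    ("adjoin",      0.1),   # P_a(beta | node)
    ("sister-adj",  0.2),   # P_sa(gamma | node, position, first-modifier flag)
    ("sister-stop", 0.7),   # P_sa(STOP | node, position, flag)
]

log_prob = sum(math.log(p) for _, p in ops)
prob = math.exp(log_prob)
print(prob)
```

For real derivations one would keep the log score itself rather than exponentiating.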
Given a CFG and head-percolation scheme, an equivalent TAG can be constructed whose derivations mirror the dependency analysis implicit in the head-percolation scheme. Furthermore, there are some dependency analyses encodable by TAGs that are not encodable by a simple head-percolation scheme. For example, for the sentence "John should have left," Magerman's rules make should and have the heads of their respective VPs, so that there is no dependency between left and its subject John (see Figure 2a). Since nearly a quarter of nonempty subjects appear in such a configuration, this is not a small problem.

[Figure 2: Bilexical dependencies for "John should have left": (a) the head-percolation analysis, in which John depends on should; (b) the desired analysis, in which John depends on left.]

(We could make VP the head of VP instead, but this would generate auxiliaries independently of each other, so that, for example, P(John leave) > 0.) TAG can produce the desired dependencies (b) easily, using a grammar like that of Figure 1. A more complex lexicalization scheme for CFG could as well (one which kept track of two heads at a time, for example), but the TAG account is simpler and cleaner.

Bilexical dependencies are not the only nonlocal dependencies that can be used to improve parsing accuracy. For example, the attachment of an S depends on the presence or absence of the embedded subject (Collins); Treebank-style two-level NPs are mismodeled by PCFG (Collins; Johnson); the generation of a node depends on the label of its grandparent (Charniak, 2000; Johnson). In order to capture such dependencies in a PCFG-based model, they must be localized either by transforming the data or modifying the parser. Such changes are not always obvious a priori and often must be devised anew for each language or each corpus.
But none of these cases really requires special treatment in a PTAG model, because each composition probability involves not only a bilexical dependency but a "biarboreal" (tree-tree) dependency. That is, PTAG generates an entire elementary tree at once, conditioned on the entire elementary tree being modified. Thus dependencies that have to be stipulated in a PCFG by tree transformations or parser modifications are captured for free in a PTAG model. Of course, the price that the PTAG model pays is sparser data; the backoff model must therefore be chosen carefully.

4 Inducing a stochastic grammar from the Treebank

4.1 Reconstructing derivations

We want to extract from the Penn Treebank an LTAG whose derivations mirror the dependency analysis implicit in the head-percolation rules of Magerman and Collins. For each node eta, these rules classify exactly one child of eta as a head and the rest as either arguments or adjuncts. Using this classification we can construct a TAG derivation (including elementary trees) from a derived tree as follows:

1. If eta is an adjunct, excise the subtree rooted at eta to form a modifier tree.
2. If eta is an argument, excise the subtree rooted at eta to form an initial tree, leaving behind a substitution node.
3. If eta has a right corner theta which is an argument with the same label as eta (and all intervening nodes are heads), excise the segment from eta down to theta to form an auxiliary tree.

Rules (1) and (2) produce the desired result; rule (3) changes the analysis somewhat by making subtrees with recursive arguments into predicative auxiliary trees. It produces, among other things, the analysis of auxiliary verbs described in the previous section. It is applied in a greedy fashion, with potential etas considered top-down and potential thetas bottom-up. The complicated restrictions on theta are simply to ensure that a well-formed TIG derivation is produced.
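Rules (1) and (2) above can be sketched directly. The following simplified Python fragment is our own illustration, not the authors' code: it omits rule (3) and the greedy top-down/bottom-up ordering, and simply detaches adjuncts as modifier trees and arguments as initial trees, leaving substitution nodes on the head spine:

```python
# Nodes are (label, role, children) with role in {'head', 'arg', 'adj'},
# as a stand-in for a head-percolation-annotated Treebank tree.

def extract(node, grammar):
    """Detach adjunct/argument subtrees into grammar; return the spine."""
    label, role, children = node
    kept = []
    for child in children:
        c_label, c_role, _ = child
        if c_role == "adj":            # rule (1): modifier tree
            grammar.append(("modifier", extract(child, grammar)))
        elif c_role == "arg":          # rule (2): initial tree + subst node
            grammar.append(("initial", extract(child, grammar)))
            kept.append((c_label + "#", "arg", []))
        else:                          # head child stays on the spine
            kept.append(extract(child, grammar))
    return (label, role, kept)

tree = ("S", "head",
        [("NP", "arg", [("John", "head", [])]),
         ("VP", "head",
          [("V", "head", [("slept", "head", [])]),
           ("ADVP", "adj", [("yesterday", "head", [])])])])

grammar = []
spine = extract(tree, grammar)
print([kind for kind, _ in grammar])  # ['initial', 'modifier']
```

The remaining spine is the elementary tree anchored by the head word, with an NP# substitution node where the subject was detached.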
4.2 Parameter estimation and smoothing

Now that we have augmented the training data to include TAG derivations, we could try to directly estimate the parameters of the model from Section 2. But since the number of (tree, site) pairs is very high, the data would be too sparse. We therefore generate an elementary tree in two steps: first the tree template (that is, the elementary tree minus its anchor), then the anchor. The probabilities are decomposed as follows:

    P_i(alpha) = P_i1(tau_alpha) P_i2(w_alpha | tau_alpha)
    P_s(alpha | eta) = P_s1(tau_alpha | eta) P_s2(w_alpha | tau_alpha, t_eta, w_eta)
    P_a(beta | eta) = P_a1(tau_beta | eta) P_a2(w_beta | tau_beta, t_eta, w_eta)
    P_sa(gamma | eta, i, f) = P_sa1(tau_gamma | eta, i, f) P_sa2(w_gamma | tau_gamma, t_eta, w_eta, f)

where tau is the tree template of the given tree, t is the part-of-speech tag of the anchor, and w is the anchor itself. The generation of the tree template has two backoff levels: at the first level, the anchor of eta is ignored, and at the second level, the POS tag of the anchor as well as the flag f are ignored. The generation of the anchor has three backoff levels: the first two are as before, and the third just conditions the anchor on its POS tag. The backed-off models are combined by linear interpolation, with the weights chosen as in Bikel et al.

[Figure 3: A few of the more frequently-occurring tree templates (modifier trees, auxiliary trees, and initial trees), with a diamond marking where the lexical anchor is inserted.]

5 The experiment

5.1 Extracting the grammar

We ran the algorithm given in Section 4.1 on sections 02-21 of the Penn Treebank.
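Linear interpolation of backed-off estimates can be sketched as follows (the weights and probabilities below are invented for illustration; the paper chooses the weights as in Bikel et al.):

```python
# Combine backed-off probability estimates by linear interpolation.

def interpolate(estimates, weights):
    """Weighted sum of estimates; weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to one"
    return sum(w * e for w, e in zip(weights, estimates))

# e.g. estimating P(tree template | node, anchor tag, anchor word),
# backing off to (node, anchor tag) and then to (node) alone:
p_full, p_tag, p_node = 0.30, 0.20, 0.05
p = interpolate([p_full, p_tag, p_node], [0.6, 0.3, 0.1])
print(round(p, 6))  # 0.245
```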
The extracted grammar is large (many thousands of lexicalized trees, with words seen fewer than four times replaced with the symbol *UNKNOWN*), but if we consider elementary tree templates, the grammar is quite manageable: a few thousand tree templates, of which only a portion occur more than once (see the rank-frequency figure below). The most frequent tree-template types account for the great majority of tree-template tokens in the training data, and removing all but these trees from the grammar increased the error rate only slightly (testing on a subset of section 00). A few of the most frequent tree templates are shown in the figure above.

[Figure: Frequency of tree templates versus rank (log-log).]

So the extracted grammar is fairly compact, but how complete is it? If we plot the growth of the grammar during training (see the figure below), it's not clear the grammar will ever converge, even though the very idea of a grammar requires it.

[Figure: Growth of grammar during training (log-log), types versus tokens.]

Three possible explanations are:

- New constructions continue to appear.
- Old constructions continue to be (erroneously) annotated in new ways.
- Old constructions continue to be combined in new ways, and the extraction heuristics fail to factor this variation out.

In a random sample of 100 once-seen elementary tree templates, we found (by casual inspection) that 34 resulted from annotation errors, 50 from deficiencies in the heuristics, and four apparently from performance errors. Only twelve appeared to be genuine. Therefore the continued growth of the grammar is not as rapid as the growth figure might indicate. Moreover, our extraction heuristics evidently have room to improve. The majority of trees resulting from deficiencies in the heuristics involved complicated coordination structures, which is not surprising, since coordination has always been problematic for TAG.
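The growth curve discussed above is simply the number of distinct tree templates (types) seen as a function of templates extracted (tokens). A hypothetical sketch of how such a curve is tallied from a stream of extracted templates:

```python
def growth_curve(template_stream):
    """Return (tokens_seen, types_seen) pairs for a stream of tree
    templates -- the data behind a types-versus-tokens growth plot.
    A curve that keeps rising suggests the grammar has not converged."""
    seen = set()
    curve = []
    for tokens, template in enumerate(template_stream, start=1):
        seen.add(template)
        curve.append((tokens, len(seen)))
    return curve
```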
To see what the impact of this failure to converge is, we ran the grammar extractor on some held-out data (section 00). Of the tree tokens in this section, only a small fraction of the tree templates -- well under one percent -- had not been seen in training; this amounts to roughly one unseen tree template every few dozen sentences. When we consider lexicalized trees, this figure of course rises: a considerably larger share of the lexicalized trees had not been seen in training. So the coverage of the grammar is quite good. Note that even in cases where the parser encounters a sentence for which the (fallible) extraction heuristics would have produced an unseen tree template, it is possible that the parser will use other trees to produce the correct bracketing.

Parsing with the grammar

We used a CKY-style parser similar to the one described in (Schabes and Waters, 1996), with a modification to ensure completeness (because foot nodes are treated as empty, which CKY prohibits) and another to reduce useless substitutions. We also extended the parser to simulate sister-adjunction as regular adjunction and to compute the flag f which distinguishes the first modifier from subsequent modifiers. We use a beam search, computing the score of an item [η, i, j] by multiplying it by the prior probability P(η) (Goodman, 1997); any item whose score is less than a small fixed fraction of that of the best item in a cell is pruned. Following (Collins, 1996), words occurring fewer than four times in training were replaced with the symbol *UNKNOWN* and tagged with the output of the part-of-speech tagger described in (Ratnaparkhi, 1996). Tree templates occurring only once in training were ignored entirely. We first compared the parser with (Hwa, 1998): we trained the model on sentences of length 40 or less in a subset of the Penn Treebank, down to parts of speech only, and then tested on sentences of length 40 or less in section 23, parsing from part-of-speech tag sequences to fully bracketed parses.
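The beam pruning described above can be sketched as follows. The item representation, the shape of the `items` mapping, and the threshold value are illustrative assumptions; what follows the text is only the idea of weighting each item's score by the prior probability of its elementary tree before comparing against the best item in the same chart cell.

```python
def prune_cell(items, beam=1e-5):
    """Prune a chart cell: keep items whose prior-weighted score is at
    least `beam` times that of the best item in the cell.
    `items` maps item -> (inside_score, prior); the beam factor here is
    an illustrative value, not necessarily the one used in the paper."""
    weighted = {item: score * prior
                for item, (score, prior) in items.items()}
    best = max(weighted.values())
    return {item for item, s in weighted.items() if s >= beam * best}
```

Because the comparison is relative to the best item per cell, the beam adapts automatically to cells spanning easy and hard substrings.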
The metric used was the percentage of guessed brackets which did not cross any correct brackets. Our parser scored higher on this metric than (Hwa, 1998), an appreciable error reduction. Next we compared our parser against lexicalized PCFG parsers, training on sections 02-21 and testing on section 23. The results are shown in the table below; they place our parser roughly in the middle of the lexicalized PCFG parsers. While the results are not state-of-the-art, they do demonstrate the viability of TAG as a framework for statistical parsing. With improvements in smoothing and cleaner handling of punctuation and coordination, perhaps these results can be brought more up to date.

[Table: Parsing results for (Magerman, 1995), (Collins, 1996), the present model, (Collins, 1999), and (Charniak, 2000), on sentences of at most 40 and at most 100 words. LR = labeled recall, LP = labeled precision; CB = average crossing brackets, 0 CB = no crossing brackets, <=2 CB = two or fewer crossing brackets. All figures except CB are percentages.]

Conclusion: related and future work

(Neumann, 1998) describes an experiment similar to ours, although the grammar he extracts only arrives at a complete parse for a fraction of unseen sentences. (Xia, 1999) describes a grammar extraction process similar to ours, and describes some techniques for automatically filtering out invalid elementary trees. Our work has a great deal in common with independent work by Chen and Vijay-Shanker (2000). They present a more detailed discussion of various grammar extraction processes and the performance of supertagging models (B. Srinivas, 1997) based on the extracted grammars. They do not report parsing results, though their intention is to evaluate how the various grammars affect parsing accuracy and how k-best supertagging affects parsing speed.
Srinivas's work on supertags (B. Srinivas, 1997) also uses TAG for statistical parsing, but with a rather different strategy: tree templates are thought of as extended parts-of-speech, and these are assigned to words based on local (e.g., n-gram) context.

As for future work, there are still possibilities made available by TAG which remain to be explored. One, also suggested by (Chen and Vijay-Shanker, 2000), is to group elementary trees into families and relate the trees of a family by transformations. For example, one would imagine that the distribution of active verbs and their subjects would be similar to the distribution of passive verbs and their notional subjects, yet they are treated as independent in the current model. If the two configurations could be related, then the sparseness of verb-argument dependencies would be reduced. Another possibility is the use of multiply-anchored trees. Nothing about PTAG requires that elementary trees have only a single anchor (or any anchor at all), so multiply-anchored trees could be used to make, for example, the attachment of a PP dependent not only on the preposition (as in the current model) but on the lexical head of the prepositional object as well, or the attachment of a relative clause dependent on the embedded verb as well as the relative pronoun. The smoothing method described above would have to be modified to account for multiple anchors.

In summary, we have argued that TAG provides a cleaner way of looking at statistical parsing than lexicalized PCFG does, and demonstrated that in practice it performs in the same range. Moreover, the greater flexibility of TAG suggests some potential improvements which would be cumbersome to implement using a lexicalized CFG. Further research will show whether these advantages turn out to be significant in practice.
Acknowledgements

This research is supported in part by an ARO grant and an NSF grant. Thanks to Mike Collins, Aravind Joshi, and the anonymous reviewers for their valuable help. S. D. G.

References

B. Srinivas. 1997. Complexity of lexical descriptions: relevance to partial parsing. Ph.D. thesis, Univ. of Pennsylvania.

Daniel M. Bikel, Scott Miller, Richard Schwartz, and Ralph Weischedel. 1997. Nymble: a high-performance learning name-finder. In Proceedings of the Fifth Conference on Applied Natural Language Processing (ANLP 1997).

John Carroll and David Weir. 1997. Encoding frequency information in lexicalized grammars. In Proceedings of the Fifth International Workshop on Parsing Technologies (IWPT '97).

Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the First Meeting of the North American Chapter of the Association for Computational Linguistics (ANLP-NAACL 2000).

Ciprian Chelba and Frederick Jelinek. 1998. Exploiting syntactic structure for language modeling. In Proceedings of COLING-ACL '98.

John Chen and K. Vijay-Shanker. 2000. Automated extraction of TAGs from the Penn Treebank. In Proceedings of the Sixth International Workshop on Parsing Technologies (IWPT 2000).

Michael Collins. 1996. A new statistical parser based on bigram lexical dependencies. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics.

Michael Collins. 1997. Three generative lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics.

Michael Collins. 1999. Head-driven statistical models for natural language parsing. Ph.D. thesis, Univ. of Pennsylvania.

Joshua Goodman. 1997. Global thresholding and multiple-pass parsing. In Proceedings of the Second Conference on Empirical Methods in Natural Language Processing (EMNLP-2).

Rebecca Hwa. 1998. An empirical evaluation of probabilistic lexicalized tree insertion grammars. In Proceedings of COLING-ACL '98.

Mark Johnson. 1998. PCFG models of linguistic tree representations. Computational Linguistics, 24.

David M. Magerman. 1995. Statistical decision-tree models for parsing. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics.

Günter Neumann. 1998. Automatic extraction of stochastic lexicalized tree grammars from treebanks. In Proceedings of the 4th International Workshop on TAG and Related Formalisms (TAG+4).

Owen Rambow, K. Vijay-Shanker, and David Weir. 1995. D-tree grammars. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics.

Adwait Ratnaparkhi. 1996. A maximum-entropy model for part-of-speech tagging. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.

Philip Resnik. 1992. Probabilistic tree-adjoining grammar as a framework for statistical natural language processing. In Proceedings of the Fourteenth International Conference on Computational Linguistics (COLING-92).

Yves Schabes and Stuart M. Shieber. 1994. An alternative conception of tree-adjoining derivation. Computational Linguistics, 20(1).

Yves Schabes and Richard C. Waters. 1995. Tree insertion grammar: a cubic-time parsable formalism that lexicalizes context-free grammar without changing the trees produced. Computational Linguistics, 21.

Yves Schabes and Richard Waters. 1996. Stochastic lexicalized tree-insertion grammar. In H. Bunt and M. Tomita, editors, Recent Advances in Parsing Technology. Kluwer Academic Press, London.

Yves Schabes. 1992. Stochastic lexicalized tree-adjoining grammars. In Proceedings of the Fourteenth International Conference on Computational Linguistics (COLING-92).

Fei Xia. 1999. Extracting tree adjoining grammars from bracketed corpora. In Proceedings of the 5th Natural Language Processing Pacific Rim Symposium (NLPRS-99).
<WLKL6B'P34; PoWL69aKLJ>;OW€; AI%BUA_ P/;OJ9P3KLJœJ/4<DOKSJ96Ÿ;R836œ  698ETD 83FH6>IKLA]P36983WL6>;>=?6>I€ôœTD 8j;ŸXGKL=?6A A<DI6G[-h783BPjP3476WL69a6FH6KLBZJ/4<DOBE6A[+P34<6A|K`P3BlBEN<  698EP/; X7m 254<6AŸ; PEP369AP3KLDOABE4<K`TÓP3BdP3DlP34<6£WL69a6FH6Gõ B¦I<; N<XO4P36b83B[‚; AI TD 8Ÿ6;OJ/4¯I<; N<XO4P36b8>[de^4<6AP34<6WS6ba769FH6KLBœJb47DGBE6A[¦P34<6 FHDOP347698>õ BBEN7  698EP/;OX¦KLB;OW`836>;GIUH;>=O;OKLWS;O@<WL6Gm5ö/Ò7ʶ²b´9ÍS²E³ Ð ² Î 69a_ 6JNP3KSDOADOT5WS6ba7KLJ>; Wc; AIBU7A]P/; J9P3KLJVJb47DGKLJ6HBE47DGN<WSIA<DOPi@ 6 JDGATN7BE6>IZe^KLP34‰Ì3÷OøVù Ï Ò€² Î 6ba769JN7P3KLDGAœD TcWL69aKLJ>; W';OAIZBUA_ P/;OJ9P3KLJ¦J/4<DOKSJ96G[ IKSBEJ9N<BEBE6>Ij@ 69WSD>e¦m °¨¬²I¬®³¬Ô¿æã ¶m§ ¼Á¥¸‚Ò_¬j´ª¶f©äãYÓÔ,¬™§ ¼Øª£‚¬ ¼O¶m¸‚©E£‰ª¯¬j´$°±¬²I¬®³Z¬¿æåY¶m§ ¼@ª£ ¬Y¼ ¶m¸‚©E£ª¯¬j´¥¸o¹ Ò_¬j´ª¶f©‰ä7åÖª£ ¶fª$³Z¶u²o¦Ê³Z¦±·j¬%ç7è¿æå€é äåè ¿æãé äã›ír¾ ¾$¬j´¬mÓ ª£‚¬U¤E§ °±Ðº­O¶mɽm¤fÆ*Ô,¬úɝ¤E§O¥¦¨¼‚¬j´Á¦¨¥Øª¯¤ ëìí<Ý£Þß Ó Ô¦¨ª£ñ¥¸ ­O¥¯¬—I¸‚¬®§‰ªµ¦¨§ ¼‚¬jÒ_¬®§ ¼ ¬®§‰ªZ¥¯Ðo§o¹ ª¶mɝª¦¨ÉtÉ%£‚¤E¦¨É¬m¾ ð‚¤m´õª£‚¬4¥¯¬®É¤E§ ¼µª¶m¥¯½Ùª£ ¶fª,¤mË-ª£‚¬4¥ÐI§ª¶mɝª¦¨Ée¹ ­_¬jË ¤m´%¬¹:°±¬²o¦ÊÉj¶m°Ç¹:É%£‚¤E¦Êɝ¬«¶f´%É%£O¦±ª¯¬®Éª¸‚´¬ŒÙÕÔy¬µ¼‚¬?™§‚¬ ª:Ô,¤ ª:ÐIÒ_¬®¥ú¤mË.¶m°±©m¤m´%¦¨ª£ ³Z¥j¾ óħ2ª£ ¬ ì7í7ÞkÞ ¶m°Ç¹ ©m¤m´%¦±ª£O³NÓ Ô,¬RÉr£‚¤I¤E¥¯¬Rª£‚¬Ø°¨¬²I¬®³¬®¥@¦¨§ñª£‚¬Øª¯´¬j¬ ³¤o¼‚¬®°¶fË{ª¯¬j´tÔ,¬«£ ¶ Ãm¬µÉr£‚¤E¥¯¬®§Ùª£‚¬Ï¥¯Ðo§ª¶u²Ó¦¨§!¶ ¥¯¬®É¤E§ ¼ÒO¶m¥¥õª£‚´¤E¸ ©E£Ïª£ ¬Öª¯´¬j¬m¾¥.3¶mÉr£°±¬²o¬®³¬Y¦¨¥ É%£ ¤E¥¯¬®§­O¶m¥¬®¼@¤E§¦±ª¥¤dÔ§¥¸‚Ò_¬j´ª¶f©‚Ó ¦¨ª¥³¤mª£‚¬j´ §‚¤o¼‚¬ ¹ ¥Ö°±¬²o¬®³¬mӌ¶m§O¼!¦±ª¥Ö³¤mª£‚¬j´¹ ¥Y¥¸‚Ò_¬j´ª¶f©‚¾óôË §‚¤¼O¶fª¶ ¦¨¥¶ Ãu¶m¦¨°Ê¶f­O°±¬mÓ Ô,¬i™´%¥¯ª-­O¶mÉ%½4¤fƪ¯¤4ɝ¤E§ ¥¦¨¼o¹ ¬j´%¦¨§ ©«¦±ª¥ ¤dÔ§.¥%¸‚Ò_¬j´ª¶f©«¶m§ ¼Õ¦±ª¥³¤mª£ ¬j´4§‚¤o¼‚¬ ¹ ¥ °±¬²o¬®³¬mÓfª£‚¬®§V}¸ ¥¯ª7¦±ª¥¤dÔ§€¥¸‚Ò_¬j´ª¶f©‚Óf¶m§ ¼œ™§O¶m°¨°±Ð ª¯¤µª£‚¬cË{´¬—I¸‚¬®§ ɝÐN¤m˪£‚¬€¼ ¶m¸‚©E£ª¯¬j´4°±¬²o¬®³¬®¥j¾óô§ ª£‚¬:ú ñâ-Þû'í ¶m°±©m¤m´r¦±ª£ ³NÓyÔy¬Ø¦¨§ Éj°¨¸ ¼ ¬µ¶m°¨° ª£‚¬.¶m°Ç¹ ª¯¬j´%§ ¶fª¦¨Ãm¬t°±¬²o¬®³¬®¥¦Ê§@ª£‚¬c°¨¶fª¯ª¦¨É¬tÒO¶m¥¥¯¬®¼ª¯¤«ª£‚¬ ë+{ ¢õ£‚¤I¤E¥¯¬j´®ÓõÔ£‚¬j´%¬Nª£‚¬.°¨¶m§‚©E¸O¶f©m¬.³¤o¼‚¬®° ¦¨³ ¹ 
Ò_¤E¥¯¬®¥3¶tÉ%£‚¤E¦Êɝ¬¤m˰±¬²o¬®³¬m¾ð‚¤m´,ª£ ¦¨¥ª¶m¥½Ó‚¦±ªÔõ¶m¥ §‚¬®É¬®¥¥%¶f´Ðtª¯¤YÉr£ ¶m§‚©m¬ ª£ ¬­O¶mɽm¤fÆNÒO¶f´%¶m³Z¬jª¯¬j´%¥¦¨§ ª£‚¬°¨¶m§‚©E¸O¶f©m¬ ³¤I¼ ¬®°I¦Ê§t¥¸OÉ%£¶4Ô,¶ Ðtª£O¶fª¸ § ¥¯¬j¬®§ Ô,¤m´%¼ ¥yÔy¬j´¬4Ô,¬®¦±©E£‰ª¯¬®¼µÃm¬j´Ðϰ±¤ ÔYÓo¸ § °Ê¦±½m¬Éj¶m¥¯¬®¥õ¦¨§ Ô£ ¦ÊÉ%£!ª£ ¬µ°¨¶m§ ©E¸ ¶f©m¬µ³Z¤I¼‚¬®°y¦¨¥€¸ ¥¯¬®¼!Ë{¤m´ ´¬®É¤m©f¹ § ¦±ª¦¨¤E§Ù¤m´ÒO¶f´r¥¦¨§‚©‚¾!ð‚¤m´«¬®¶mÉ%£U¤mË$ª£‚¬®¥¯¬Nª:Ô,¤!¶m°Ç¹ ©m¤m´%¦±ª£O³Z¥jÓoÔy¬ÖÉj¶m§ɝ¤E§O¥¦¨¼‚¬j´¶m°¨°³¬®³t­¬j´r¥õ¤m˓ª£‚¬ ¥¸‚Ò_¬j´%¥ÐI§ ¥¬jª3¤m˓ª£ ¬›°¨¬²I¬®³¬mÓo¤m´õ¤E§ °±Ð«ª£‚¤E¥¯¬›Ô¦±ª£ ª£‚¬!ҏ¶f´ªÄ¹<¤mËʹ:¥Ò¬j¬®Ér£ ¶m¥Õ¥¯Ò_¬®Éj¦™O¬®¼×¦¨§#ª£ ¬Ù¦¨§‚ÒO¸ ª ´¬jÒ ´%¬®¥¯¬®§‰ª¶fª¦¨¤E§-¾ï £ ¦¨¥©E¦±Ãm¬®¥y¸ ¥Ë{¤E¸‚´3¶m°±©m¤m´r¦±ª£ ³Z¥jÓ ìí<ÞÞî û úúÑÓ ìí<ÞÞî üiðoý Óoú ñâ-Þûoíî û úúÑÓú ñâ+Þkû'íî üiðoý ¾ þ ÿ ;_=-B:á3=ŸE8:C“9º=-9yà PRDO>fá3B ŸE> ï £‚¬«¬jÃu¶m°¨¸O¶fª¦±¤E§î¦Ê¥Y­O¶m¥¯¬®¼Ù¤E§îɝ¤E³ZÒO¶f´%¦¨¥¯¤E§ÙÔ¦±ª£ ¶Zª¯¬®¥ª4ɝ¤m´ҏ¸ ¥ ¤mËQjÛmÛ@¥¯¬®§ª¯¬®§ ɝ¬®¥4´%¶m§ ¼‚¤E³«°±ÐµÉr£‚¤f¹ ¥¯¬®§ºË ´¤E³ ª£‚¬–{7¬®§ §ºïŒ´¬j¬ÙÌõ¶m§‚½#¿ç­~ñɝ¤m´ÒO¸ ¥ è꧂¤mª,¸ ¥¯¬®¼«¦Ê§ª¯´%¶m¦¨§ ¦Ê§‚©‚ÓE¤mË-ɝ¤E¸‚´%¥¯¬dír¾ï£‚¬¶®Ãm¬j´r¶f©m¬ ¥¯¬®§ª¯¬®§ ɝ¬ °±¬®§‚©mª£.¦Ê¥I¾mÚÏÔy¤m´%¼O¥jӏ¶m§ ¼.ª£‚¬c¶ Ãm¬j´¯¹ ¶f©m¬°±¬²‚¦¨Éj¶m°,¶m³c­O¦¨©E¸ ¦±ª:ÐÙÒ¬j´€Ô,¤m´%¼U¦¨¥Ÿr‚¾SI¾ñèêï £‚¬ ³Z¶u²‚¦¨³€¸ ³°±¬²‚¦¨Éj¶m°Ö¶m³c­¦±©E¸ ¦±ªÑÐñ¦¨¥mÚúË{¤m´Û3<ÂEù¾òí ï £‚¬Ò ´¬®³Z¦¨¥¬¤m˪£ ¬ ¬jÃu¶m°Ê¸ ¶fª¦±¤E§«¦Ê¥ª£ ¶fª3ª£ ¬©m¤E¶m° ¤mË©m¬®§ ¬j´%¶fª¦±¤E§Å¦¨¥€ª¯¤!³Z¶fªÉr£Åª£‚¬@©m¤E°¨¼U¥ª¶m§ ¼ ¶f´%¼ ¶m¥RÉj°±¤E¥¯¬®°±Ð×¶m¥ØÒ_¤E¥¥¦±­°±¬mӛ¬jÃm¬®§ÿ¦¨Ë¤mª£‚¬j´Õ´%¬®¥¸ °±ª¥ Ô,¤E¸ °¨¼¶m°¨¥¯¤c­¬ ¶mÉjɝ¬jÒOª¶f­O°±¬¤m´3¬jÃm¬®§Ï¶fÒ ÒO´¤mÒ ´%¦¨¶fª¯¬ ½ ¦¨§Å¥¤E³¬.¶fÒ ÒO°Ê¦¨Éj¶fª¦±¤E§ ¥jÓ,Ãf¶f´%¦¨¶fª¦±¤E§Å³«¶®ÐU­_¬¸ ¥¯¬¹ Ëê¸ ° Ó ¦¨§NÔ$£ ¦¨É%£.Éj¶m¥¯¬cª£ ¦Ê¥ ¬jÃf¶m°¨¸ ¶fª¦±¤E§³«¶®Ð§‚¤mª$­¬ ¶fÒ Ò ´%¤mÒ ´%¦¨¶fª¯¬m¾ ð ¤m´îª£‚¬U¬jÃf¶m°¨¸ ¶fª¦¨¤E§-ÓZÔ,¬»¸ ¥¯¬ÅªÑÔy¤ÿ¼ ¦ÇÆ_¬j´¬®§ª ³¬jª¯´r¦¨Éj¥j¾ï £‚¬™O´%¥ª›³Z¬jª¯´%¦¨ÉfÓ ý]ìíñâ¼û +í<û á Ó ¦¨¥ ­¶m¥¯¬®¼ú¤E§U¥¦¨³Z¦Ê°¨¶f´€³Z¬jª¯´%¦¨Éj¥¸ ¥¯¬®¼ú¦¨§U³«¶mÉ%£ ¦¨§ ¬ ª¯´%¶m§O¥°¨¶fª¦±¤E§-¾õ¿Á¬ɝ¤E³ÒO¶f´%¬cª£‚¬t¤E¸‚ª¯ÒO¸‚ª$¤mË #%$'&5p (+*-, ª¯¤Zª£‚¬Y©m¤E°¨¼.¥¯ª¶m§ ¼ ¶f´%¼.¥¯¬®§‰ª¯¬®§Oɝ¬t¶m§ 
¼Õɝ¤E¸ §‰ª ª£‚¬Á§I¸ ³t­¬j´µ¤mËc³¤dÃm¬!¶m§ ¼#¥¸‚­O¥ª¦±ª¸‚ª¦±¤E§U¤mÒ_¬j´¯¹ ¶fª¦±¤E§O¥Nª£ ¶fª§‚¬j¬®¼#ª¯¤Å­_¬RÒ_¬j´Ë{¤m´%³¬®¼ª¯¤Uª¯´%¶m§ ¥Ä¹ Ë{¤m´%³ª£ ¬3¶mɝª¸ ¶m°¤E¸‚ª¯ÒO¸‚ª“¦Ê§‰ª¯¤ ª£ ¬3©m¤E°¨¼t¥¯ª¶m§ ¼ ¶f´%¼¾ ï £O¦¨¥ §I¸ ³c­_¬j´¦Ê¥ ¥¸‚­ ª¯´r¶mɝª¯¬®¼Ë{´¤E³&¶m§ ¼ª£‚¬®§.¼ ¦±¹ Ão¦¨¼‚¬®¼Ø­IÐ.ª£‚¬Z°±¬®§ ©mª£Õ¤mË,ª£‚¬¥¯¬®§ª¯¬®§ ɝ¬mӓÐo¦±¬®°¨¼O¦¨§‚© ¶R§I¸ ³c­_¬j´€­_¬jª:Ô,¬j¬®§ÅÛÕ¶m§O¼¼fÓyÔ¦±ª£¼µª£‚¬@­¬®¥ª ¥ɝ¤m´%¬m¾Õð‚¤m´¶Ø³¤m´¬@¼‚¬jª¶m¦¨°±¬®¼î¼ ¦¨¥Éj¸ ¥%¥¦±¤E§Á¤m˪£‚¬ ¦¨¥%¥¸‚¬ ¤mË,³Z¬jª¯´%¦¨Éj¥Y¦¨§Ø¬jÃf¶m°¨¸ ¶fª¦±¤E§Ó¤mËyª£ ¬«³¬jª¯´%¦¨Éj¥ Ô,¬Z£ ¶®Ãm¬µ¸ ¥¯¬®¼-Ó¶m§O¼R¤mË,¶m§Á¬²oÒ_¬j´%¦¨³¬®§ª›´¬®°¨¶fª¦¨§ © £I¸ ³Z¶m§}¸ ¼‚©E³¬®§ª¥,ª¯¤€ª£ ¬®¥¯¬Ö³¬jª¯´%¦¨Éj¥jӂ¥¯¬j¬ZèêÌõ¶m§o¹ ©E¶m°±¤m´%¬ ¬jª3¶m° ¾±Ó!fÛmÛmۉír¾õçIª¯´%¦¨§‚©Ö¶mÉjÉj¸‚´r¶mɝР³Z¬®¶m¥¸‚´¬®¥ ª£‚¬yÒ_¬j´Ë ¤m´r³Z¶m§ ɝ¬¤m˂ª£‚¬y¬®§‰ª¦¨´¬ #%$'&)(^*\, ¥¯Ðo¥¯ª¯¬®³N¾ ð ¤m´Yª£‚¬Z¥¯¬®É¤E§ ¼Á³Z¬jª¯´%¦¨ÉfÓ ûºû  +í<û á ӌÔy¬ ¼ ¦Ê¥¯´¬j©E¶f´%¼ °¨¦¨§ ¬®¶f´Œ¤m´%¼ ¬j´¶m§O¼€Éj¶m°ÊÉj¸ °¨¶fª¯¬´¬®Éj¶m°¨°o¶m§ ¼ Ò ´%¬®Éj¦¨¥¦±¤E§×¤E§ºª£‚¬!­O¶f©Qèê¦ ¾ ¬m¾±Óc³€¸ °±ª¦¨¥¬jªríµ¤m˰±¬²¹ ¬®³¬®¥c¦¨§.ª£ ¬©m¬®§‚¬j´%¶fª¯¬®¼!¥¯ª¯´r¦¨§‚©@¶m¥Öɝ¤E³ÒO¶f´¬®¼Rª¯¤ ª£‚¬ ­O¶f©@¤mË,°±¬²I¬®³Z¬®¥Ö¦¨§.ª£ ¬€©m¤E°Ê¼Ø¥ª¶m§ ¼ ¶f´%¼R¥¯¬®§o¹ ª¯¬®§ ɝ¬m¾cço¦Ê§ ɝ¬cª£‚¬ §I¸ ³c­_¬j´4¤mËÔ,¤m´%¼ ¥4¦¨§.ª£ ¬tªÑÔy¤ ¥¯¬®§ª¯¬®§ ɝ¬®¥4¦¨¥ ¸ ¥%¸ ¶m°¨°±ÐϪ£‚¬t¥¶m³Z¬mÓO´¬®Éj¶m°¨°“¶m§ ¼NÒ ´¬¹ Éj¦¨¥%¦±¤E§¶f´¬îÃm¬j´ÐÉj°±¤E¥¯¬mӀ¶m§ ¼Ô,¬Ù´¬jÒ_¤m´ªÕ¶m¥Õ¥¯¬jª ¶mÉjÉj¸‚´r¶mɝÐñª£‚¬®¦¨´µ¶ Ãm¬j´%¶f©m¬mӛÔ£ ¦¨Ér£ñÉj¶m§#­_¬Õ¦Ê§‰ª¯¬j´¯¹ Ò ´%¬jª¯¬®¼»¶m¥Zª£ ¬NÒ¬j´rɝ¬®§‰ª¶f©m¬Ø¤mË4ɝ¤m´%´¬®Éª°±ÐU©m¬®§‚¬j´¯¹ ¶fª¯¬®¼€°±¬²o¬®³¬®¥j¾ŒÌõ¶f©4¶mÉjÉj¸‚´%¶mɝÐY³¬®¶m¥¸ ´¬®¥ª£‚¬3Ò¬j´¹ Ë{¤m´%³Z¶m§ ɝ¬c¤mˌª£‚¬t묝²o¬®³¬€¢õ£‚¤I¤E¥¯¬j´ ¤E§O°±Ðm¾ ¬ ´¬®¥¸ °±ª¥ Ë{¤m´ ª£‚¬ ™O´%¥ª ª¶m¥½ è{ª£‚¬ °±¬²‚¦¨Éj¶m°Ç¹<­_¬jË{¤m´¬¹:¥¯Ðo§‰ª¶mɝª¦ÊÉe¹:É%£‚¤E¦¨É¬ ¶m§O¼ °¨¬²o¦¨Éj¶m°±¹ Ô¦¨ª£o¹:¥¯Ðo§‰ª¶mɝª¦¨Ée¹:Ér£‚¤E¦¨É¬Z¶f´%Ér£ ¦±ª¯¬®Éª¸‚´%¬®¥rí4¶f´¬Z¥¸ ³¹ ³Z¶f´r¦±·j¬®¼Á¦¨§Rª£‚¬ª¶f­O°¨¬Z¦¨§Rð7¦¨©E¸‚´¬r‚¾Œy$¥YÔy¬ZÉj¶m§ ¥¯¬j¬mÓ ëìí7Ý%Þß ¤E§Á¦±ª¥4¤dÔ§RË{¶f´›¤E¸‚ª¯Ò_¬j´Ë{¤m´%³Z¥›¶m§Ð ¤mª£‚¬j´t¶m°±©m¤m´%¦±ª£ ³1ª£ ¶fªY¼‚¤I¬®¥Y§‚¤mªÖ­O¶mÉ%½Ø¤fÆUª¯¤¦±ªj¾ 
y4°¨°‚¤mË-ª£‚¬4¶m°±©m¤m´%¦±ª£O³Z¥ª£ ¶fªõ£ ¶®Ãm¬$¤E§‚¬­¶mɽm¤fÆ.ª¯¤ ëìí<Ý%Þß Ò_¬j´Ë{¤m´%³ ¶fª«ª£ ¬¥¶m³¬.°±¬jÃm¬®°<Ó,¶m¥Ï¼‚¤¬®¥ Ý%Þßdà5á+â^î]àcá+â^î ëìí<Ý£Þß ÓÔ£ ¦¨°¨¬ Ý£Þßdà5á+â)î Ý%Þß5î ëìí<Ý%Þß Ò_¬j´Ë{¤m´%³Z¥¥°Ê¦±©E£‰ª°¨Ð«­_¬jª¯ª¯¬j´®¾ ¬´¬®¥¸O°±ª¥Ë{¤m´«ª£‚¬Ø¥¯¬®É¤E§ ¼Åª¶m¥½ºèêÉ%£‚¤I¤E¥¦¨§‚© °±¬²o¬®³¬®¥Ø¶fË ª¯¬j´Á¥¯Ðo§ª¶mɝª¦¨ÉîÉr£‚¤E¦¨É¬díÕ¶f´¬ú¥¸ ³Z³«¶u¹ ´%¦¨·j¬®¼N¦¨§@ª£‚¬Öª¶f­O°±¬Y¦Ê§Nð7¦¨©E¸‚´¬tI¾3Ì,¬®Éj¶m¸ ¥¯¬t¤mˌª£‚¬ ¼ ¦±Æ¬j´¬®§ªZª¶m¥¯½o¥jÓõª£‚¬´%¬®¥¸ °±ª¥ZÉj¶m§O§‚¤mªZ­_¬.³¬®¶m§o¹ ¦¨§ ©mË{¸ °Ê°±Ðcɝ¤E³ÒO¶f´%¬®¼Zª¯¤t𦱩E¸ ´¬Hr‚Ó­¸‚ªÔy¬$Éj¶m§Ï¥¯¬j¬ ª£ ¶fªjÓE¶m¥Œ¬²oÒ¬®Éª¯¬®¼Ó #%$o&5(+*-, Ò¬j´%Ë ¤m´%³«¥-­_¬jª¯ª¯¬j´Œ¤E§ ª£ ¦Ê¥“ª¶m¥¯½cª£ ¶m§c¤E§tª£‚¬£™O´%¥¯ªŒª¶m¥¯½¾7¿#¦±ª£‚¤E¸ ª“ÒO¶f´ªÄ¹ ¤mË ¹:¥¯Ò_¬j¬®É%£Q¦¨§‚Ë{¤m´%³Z¶fª¦¨¤E§-ÓYª£‚¬ú%¬j¬Â³¤o¼‚¬®°cÒ¬j´¹ Ë{¤m´%³Í­_¬jª¯ª¯¬j´Nª£ ¶m§ª£‚¬Á°¨¦¨§‚¬®¶f´@³¤o¼‚¬®° Ó ­O¸ ªÏª£‚¬ ¶m¼‚Ãf¶m§ª¶f©m¬R¦Ê¥«°±¤E¥¯ª«Ô£‚¬®§Åҏ¶f´ªÄ¹<¤mËʹ:¥Ò¬j¬®Ér£ñ¦Ê§‚Ë ¤m´¹ Õ¤o¼‚¬®° çIª¯´%¦¨§‚© y$ÉjÉj¸o¹ ´%¶mɝРÌõ¶f© y4ÉjÉj¸o¹ ´%¶mɝР´r¶m§ ¼‚¤E³ Ûo¾Sq<r Ûo¾mÚ ë“¬² çoÐI§ Ûo¾Sq<r Ûo¾mÚ ë“¬² çoÐI§o¹:듬² Ûo¾`r Ûo¾ÓfÚ ë“¬² çoÐI§o¹:듬²¹4ª¯´%듬² Ûo¾ q Ûo¾S  듬² çoÐI§o¹ôçoÐI§o¹4ª¯´%듬² Ûo¾ ! Ûo¾S€ 듬² çoÐI§o¹4ª¯´%듬² Ûo¾ ! Ûo¾S€ 듬² Ûo¾`r Ûo¾ÓfÚ ë“¬²¹4ª¯´%듬² Ûo¾ ! 
Ûo¾S€ 듬² çoÐI§o~E¤E¦¨§ª Ûo¾`rt Ûo¾Ó  듬² çoÐI§o~E¤E¦¨§ªÄ¹$ª¯´r묝² Ûo¾‚ Ûo¾S€ 4ª¯´%듬² Ûo¾StmÚ Ûo¾S t 𦱩E¸‚´¬ r­n ço¸O³Z³Z¶f´Ð ¤mË ´¬®¥¸ °¨ª¥1Ë{¤m´ ª£‚¬ °±¬²‚¦¨Éj¶m°Ç¹<­_¬jË{¤m´¬¹:¥¯Ðo§‰ª¶mɝª¦¨Ée¹:Ér£‚¤E¦¨É¬ ¶m§ ¼ °±¬²‚¦¨Éj¶m°Ç¹ Ô¦±ª£‚¹:¥¯ÐI§ª¶mɝª¦¨Ée¹:Ér£‚¤E¦¨É¬ ¶f´%É%£ ¦¨ª¯¬®Éª¸‚´¬®¥GË ¤m´Gª£‚¬ ™O´%¥ªª¶m¥¯½ ³Z¶fª¦±¤E§.¦¨¥¸ ¥¯¬®¼¶m¥Ôy¬®°Ê° ¾ ¿Á¬«§ ¤ Ôȼ‚´%¶ Ô ª¯¬®§‰ª¶fª¦¨Ãm¬«É¤E§ Éj°¨¸ ¥%¦±¤E§ ¥4Ë ¤m´cË{¸o¹ ª¸‚´¬µÔ,¤m´½!Ë{´¤E³ ª£ ¬®¥¯¬@´¬®¥¸ °±ª¥®¾R¢õ°±¬®¶f´r°±ÐmÓ¸ ¥¦¨§ © ¦¨§‚Ë{¤m´%³Z¶fª¦¨¤E§îË{´¤E³Lª£‚¬³¤mª£ ¬j´Z§‚¤I¼ ¬µ¦¨§Oɝ´¬®¶m¥¯¬®¥ Ò_¬j´Ë{¤m´%³Z¶m§ ɝ¬mÓ-¦±Ë¤E§ °¨Ð¥°¨¦±©E£ª°±Ðm¾œ¾¤dÔy¬jÃm¬j´ Óª£‚¬j´¬ ¦¨¥y§‚¤t´%¬®°¨¦¨¶f­O°±¬¬jÃo¦¨¼‚¬®§Oɝ¬ª£ ¶fª,ª£‚¬›°±¬²o¦ÊÉj¶m°Ç¹<­_¬jË ¤m´¬¹ ¥¯Ðo§‰ª¶mɝª¦ÊÉe¹:É%£‚¤E¦¨É¬t¶f´%É%£ ¦¨ª¯¬®Éª¸‚´¬Ö¤E¸‚ª¯Ò_¬j´Ë{¤m´%³Z¥,ª£‚¬ °±¬²‚¦¨Éj¶m°Ç¹<Ô¦¨ª£o¹:¥¯Ðo§‰ª¶mɝª¦¨Ée¹:Ér£‚¤E¦¨É¬Ù¶f´rÉ%£ ¦±ª¯¬®Éª¸ ´¬®¥N¤m´ ø ÷E rùÙømù?Oƒ?f¾Àï £‚¬j´%¬Ù¦¨¥.¶m°Ê¥¯¤»§‚¤Å¬jÃo¦¨¼‚¬®§Oɝ¬!ª£ ¶fª ª£‚¬ú³¤mª£‚¬j´¹ ¥R¥¸ Ò¬j´%ª¶f©ñ¶uÆ_¬®Éª¥!°±¬²‚¦¨Éj¶m°cÉr£‚¤E¦¨É¬ ¦¨§»ª£‚¬Õ¼ ¶m¸ ©E£‰ª¯¬j´®¾ y4¥Ï¶Ù´¬®¥¸ °¨ªjÓ,Ô,¬Rɝ¤E§ Éj°¨¸ ¼‚¬ ª£ ¶fª›Ë{¤m´4ª£ ¬Ÿ™´%¥¯ª4ª¶m¥¯½.Ô,¬Éj¶m§R´¬®¥¯ª¯´%¦¨Éª4¤E¸‚´Ö¶fªÄ¹ ª¯¬®§ª¦±¤E§Âª¯¤R¶Ø¥¦¨³ZÒO°¨¦™O¬®¼Á¶f´rÉ%£ ¦±ª¯¬®Éª¸ ´¬N¦¨§!Ô£O¦¨É%£ °±¬²‚¦¨Éj¶m°É%£‚¤E¦¨É¬›Ë{¤m´õª£‚¬$¬®§ª¦±´¬4ª¯´¬j¬›¤oÉjÉj¸‚´%¥y­¬jË{¤m´¬ ¥¯Ðo§‰ª¶mɝª¦ÊÉÉ%£‚¤E¦Êɝ¬Ë ¤m´«ª£‚¬N¬®§‰ª¦¨´¬Nª¯´¬j¬îèê¦ ¾ ¬m¾±Ó,ª£‚¬ ª:Ô,¤ÁÉ%£‚¤E¦Êɝ¬®¥€¶f´%¬Ï§‚¬®¦±ª£ ¬j´t¥¦¨³€¸ °±ª¶m§ ¬j¤E¸ ¥c§‚¤m´ ¦¨§o¹ ª¯¬j´%°±¬®¶ Ãm¬®¼ír¾Q𠸂´ª£‚¬j´r³¤m´¬mÓª£‚¬R¦¨³Ò_¤m´ª¶m§ ɝ¬.¤mË ª£‚¬€­O¶mɽm¤fÆ ëìí<Ý£Þß ³¤o¼‚¬®°7¥£‚¤dÔ¥ª£ ¬€¦¨³ZÒ¤m´¹ ª¶m§ ɝ¬¤mË,ª£‚¬¥¯ÒO¶f´%¥¬ ¼O¶fª¶@Ò ´¤m­O°±¬®³¾cï £I¸ ¥jÓ-ª£‚¬ ³¤o¼‚¬®°Ôy¬,Ô¦¨°¨°‰É¤E§ ɝ¬®§‰ª¯´r¶fª¯¬¤E§t¦¨¥ Ý%Þß5î ëìí<Ý%Þß ¾  åÏC“9ŒŸmD‚äŸÕ8:9Qã D ä8Ñ¡o=-BÖ嵿3C“8:¡oD ¿Á¬t§ ¤ Ô ¦Ê§‰Ãm¬®¥¯ª¦¨©E¶fª¯¬€£‚¤dÔ ³€¸ É%£Õɝ¤E§‰ª¯¬²oª›¦¨§Nª£‚¬ ¥¯Ðo§‰ª¶u² ª¯´¬j¬¦Ê¥§‚¬j¬®¼‚¬®¼¦¨§ ¤m´%¼ ¬j´ª¯¤Y¤mÒ ª¦¨³Z¦¨·j¬õ°¨¬²¹ ¦¨Éj¶m°4É%£‚¤E¦Êɝ¬m¾ óħ»ª£ ¬Õ¬²oÒ_¬j´%¦¨³¬®§ª¥Z´¬jÒ_¤m´ª¯¬®¼¦¨§ -D>e56=?6b8>[?e56+4;>=?6^A<D PcN7BE6>IVKLA7TgDO83FQ; P3KLDOAl; @ DGNPP3476 ¡BE6FQ; AP3KLJZD 8žBU7A]P/; 
J9P3KLJR¢£83DGWL6DOTdP3476œI<; N<XO4P36b8>[^;OA<IK`P BP/;OA<I7B'P3Dd836>;OBEDOAHP34; PoP34<KLBoKLA7TD 83FQ; P3KLDGAQJ>; AH4<6WL HKLAHWL69a_ KLJ>;OWJb47DGKLJ6Gm Õ¤o¼‚¬®° çoª¯´%¦¨§‚© y4Ée¹ Éj¸ ´%¶mɝРÌõ¶f© y4ÉjÉj¸o¹ ´%¶mɝР¬j¬¹/y4°¨° Ûo¾mÚ Ûo¾ Ú q ¬j¬¹/{7¤E¥ Ûo¾  Ûo¾ Ú q 댦¨§‚¬®¶f´¯¹/y4°¨° Ûo¾€ Ûo¾ ÚfÛ ëŒ¦¨§‚¬®¶f´¯¹/{7¤E¥ Ûo¾ÓuÛ Ûo¾ Ú q 𦱩E¸‚´%¬ tkn ço¸ ³Z³Z¶f´%Ð ¤mË ´¬®¥¸O°±ª¥1Ë{¤m´ ª£‚¬ ¥¯Ðo§ª¶mɝª¦¨Ée¹<­_¬jË ¤m´¬¹:°¨¬²o¦¨Éj¶m°±¹:É%£‚¤E¦¨É¬Á¶m°±©m¤m´r¦±ª£ ³Z¥«Ë ¤m´ ª£‚¬t¥¯¬®É¤E§ ¼Nª¶m¥¯½ ª£‚¬ÏÒ ´¬jÃo¦±¤E¸ ¥t¥¯¬®Éª¦±¤E§-Ó¶Ô,¬®¶f½o§‚¬®¥¥—¸O¦¨É½o°±ÐR­_¬¹ ɝ¤E³¬®¥$¶fÒ ÒO¶f´¬®§ª]nÔ£ ¦Ê°±¬4°¨¬²o¦¨Éj¶m°-Ér£‚¤E¦¨É¬Y¼‚¬jÒ_¬®§ ¼O¥ ¤E§Qª£‚¬ú³¤mª£‚¬j´!§ ¤I¼‚¬mÓtª£‚¬ú´¤I¤mªR°¨¬²I¬®³¬UÉj¶m§o¹ §‚¤mªc­¬Ér£‚¤E¥¯¬®§Ù¦Ê§Rª£ ¦¨¥Y³Z¶m§O§‚¬j´®¾ óô§!ª£‚¬Z¬²oÒ¬j´¹ ¦¨³Z¬®§‰ª¥t´¬jÒ_¤m´ª¯¬®¼Â¶f­¤dÃm¬@Ôy¬N¥¦¨³ZÒO°±ÐR¸ ¥¯¬Ï°¨¬²o¦¨Éj¶m° Ë{´¬—I¸‚¬®§ ɝÐ@¶fªª£‚¬Ö´¤I¤mªj¾£yÿÔ ´%¤E§‚©ZÉ%£‚¤E¦Êɝ¬t¶fª ª£‚¬ ´¤I¤mªjÓ£‚¤ Ô,¬jÃm¬j´®ÓÉj¶m§Â°±¬®¶m¼!ª¯¤R¶Éj¶m¥Éj¶m¼‚¬µ¤mË ¥¸‚­ ¹ ¥¯¬—I¸‚¬®§ªÔ ´¤E§‚©Ér£‚¤E¦¨É¬®¥ ¶fª °±¤dÔy¬j´$§‚¤I¼ ¬®¥,¶m¥ ª£‚¬jÐ ¶m°¨°O¼‚¬jÒ_¬®§ ¼ ¤E§Ò ´¬jÃo¦±¤E¸ ¥%°±Ðc³Z¶m¼‚¬Ô ´¤E§‚©YÉr£‚¤E¦¨É¬®¥j¾ ¿Á¬ ª£ ¬j´¬jË ¤m´%¬ Ô¦¨°¨°­O¶m¥¯¬ °¨¬²o¦¨Éj¶m°‚Ér£‚¤E¦¨É¬ ¤E§«¶m§¬²¹ ª¯¬®§ ¼ ¬®¼Á§‚¤mª¦±¤E§R¤mË3ª¯´%¬j¬Zɝ¤E§‰ª¯¬²oªYÔ£ ¦ÊÉ%£R¦¨§ Éj°¨¸O¼‚¬®¥ ª£‚¬t¼ ¶m¸‚©E£ª¯¬j´%¥j¾ Ì,¬®Éj¶m¸ ¥¬Ïª£‚¬Ï§¸ ³t­_¬j´Y¤m˼ ¶m¸‚©E£ª¯¬j´%¥€¦¨§Á¶.¼‚¬¹ Ò_¬®§ ¼‚¬®§OɝЫª¯´¬j¬c¦Ê¥§‚¤mª…r÷ üe÷-­_¤E¸ § ¼-Óo¦±ª ¦¨¥¦¨³¹ Ò_¤E¥¥¦¨­O°±¬cª¯¤Õɝ´¬®¶fª¯¬«¶@Ò ´¤m­¶f­O¦¨°¨¦±ªÑг¤o¼‚¬®°7¤mËyª£‚¬ ªÑÐÒ_¬\ç7è¿Rè ¿æå 1 é7éR¿æåoírÓmÔ£‚¬j´¬V¿O¦¨¥Œª£‚¬§‚¤o¼‚¬ ¹ ¥7°±¬²¹ ¬®³¬›¶m§ ¼¿æå7¦¨¥ª£ ¬4°±¬²o¬®³¬$¤mË-ª£‚¬:ª£Ï¼ ¶m¸ ©E£‰ª¯¬j´®¾ óħ ¥¯ª¯¬®¶m¼-ÓÔ,¬Õ³«¶f½m¬Ø¶m§¦¨§ ¼‚¬jÒ_¬®§ ¼ ¬®§ ɝ¬N¶m¥¥¸ ³Ò ¹ ª¦±¤E§ Ë{¤m´ª£‚¬¼ ¶m¸ ©E£‰ª¯¬j´%¥3¶m§ ¼Éj¶m°¨Éj¸ °¨¶fª¯¬ª£ ¬ §‚¤I¼ ¬¹ ¼ ¶m¸ ©E£‰ª¯¬j´ Ò ´¤m­¶f­O¦¨°¨¦±ª¦¨¬®¥Ö¥¯¬jÒO¶f´%¶fª¯¬®°±Ðm¾Øé¤mª¯¬@ª£ ¶fª ª£‚¬õ¼ ¶m¸‚©E£ª¯¬j´%¥]¹f°±¬²o¬®³¬®¥£ ¶®Ãm¬§‚¤mª¶mɝª¸ ¶m°¨°¨ÐÖ­¬j¬®§ Ér£‚¤E¥¯¬®§@Ðm¬jªjÓO¥¯¤ Ô,¬Ö¸ ¥¯¬4ª£‚¬Ö¼ ¶m¸‚©E£ª¯¬j´%¥]¹‚³¬®¶m§ ¦¨§ © Ò_¤mª¯¬®§ª¦¨¶m°›´%¶fª£‚¬j´ª£O¶m§#ª£‚¬®¦±´°±¬²o¬®³¬®¥jӛ¶m¥¼ 
¦¨¥¯¹ Éj¸ ¥%¥¯¬®¼¦¨§.çI¬®Éª¦¨¤E§ÚqI¾ ¬@´¬®¥¸ °±ª¥ ¶f´¬¥%£‚¤ Ô§U¦Ê§Uð7¦¨©E¸‚´¬I¾¾$¬j´¬mÓ Ô,¬$£ ¶ Ãm¬´¬®§O¶m³¬®¼ª£‚¬ Ý%Þß ³¤o¼‚¬®°‚ª¯¤ ð -Þ Ó‰ª¯¤ ´¬"!¬®Éª«ª£‚¬Ëê¶mɝª«ª£O¶fªZ¤E§ °±Ðú¦¨§‚Ë{¤m´%³Z¶fª¦±¤E§U¶f­_¤E¸‚ª ª£‚¬#§‚¤o¼‚¬»Ô$£‚¤E¥¯¬#°±¬²I¬®³Z¬¦¨¥î­_¬®¦¨§‚©ÿÉ%£‚¤E¥¯¬®§ ¦¨¥ ª¶f½m¬®§2¦¨§‰ª¯¤×¶mÉjɝ¤E¸ §ªj¾ Õ¤I¼‚¬®°$# ðoì&%+ÞkíîOâ+ð \Þ ¦¨¥Âª£‚¬ Ý%Þß5î ëì7í7Ý%Þß ³¤o¼‚¬®°ÏË ´¤E³ 𦱩E¸‚´%¬ tkn Ô,¬.¼‚¬jª¯¬j´%³«¦¨§‚¬¶Á§‚¤o¼‚¬ ¹ ¥Z°±¬²o¬®³¬­¶m¥¯¬®¼U¤E§Uª£‚¬ ³¤mª£ ¬j´¹ ¥Œ°¨¬²I¬®³¬m¾7óħc³¤o¼‚¬®°  ð -Þî'\û( )%cìÞkí7ý ª£‚¬›É%£ ¤E¦¨É¬4¦Ê¥­O¶m¥¯¬®¼Ï¤E§Ïª£‚¬4¼O¶m¸‚©E£‰ª¯¬j´r¥3¤E§ °¨Ðm¾ð¦Ç¹ § ¶m°Ê°±ÐmÓO³¤o¼‚¬®°*# ðoì&%+ÞkíîOâ+ð \ހî'\û( *+%cìÞkí<ý ɝ¤E§o¹ ¥¦Ê¼‚¬j´%¥×­_¤mª£1ª£ ¬2³¤mª£‚¬j´¹ ¥ °±¬²o¬®³¬\¶m§ ¼1ª£‚¬ ¼ ¶m¸ ©E£‰ª¯¬j´%¥¹,è{¬²‚Éj°¨¸ ¼ ¦¨§ ©‚Ó“¤mË É¤E¸‚´%¥¬mӌª£‚¬µ³¤mª£‚¬j´ ¦¨§tÉj¶m¥¯¬õ¤m˂ª£‚¬,´¤¤mª§‚¤o¼‚¬mÓm¶m§ ¼c¼O¶m¸‚©E£‰ª¯¬j´r¥7¦Ê§cÉj¶m¥¯¬ Õ¤I¼‚¬®° çoª¯´%¦¨§‚© y4ÉjÉj¸o¹ ´r¶mɝРÌõ¶f© y4ÉjÉj¸o¹ ´%¶mɝР¶m§O¼‚¤E³ Ûo¾Sq<r Ûo¾mÚ é$¤I¼‚¬ Ûo¾StmÚ Ûo¾S t Õ¤mª£‚¬j´¯¹:§‚¤o¼‚¬ Ûo¾ ! Ûo¾S€ é$¤I¼‚¬¹:¼O¶m¸‚©E£‰ª¯¬j´r¥ Ûo¾ q Ûo¾S  Õ¤mª£‚¬j´¯¹:§‚¤o¼‚¬¹:¼ ¶m¸‚©E£ª¯¬j´%¥ Ûo¾€ Ûo¾ ÚfÛ ð¦±©E¸‚´¬,knÿço¸ ³«³Z¶f´Ð#¤mËZ´¬®¥¸O°±ª¥Ë{¤m´Á¼ ¦ÇÆ_¬j´¬®§‰ª ¥¯Ðo§‰ª¶mɝª¦ÊÉYɝ¤E§‰ª¯¬²oª›¸ ¥¶f©m¬ ¤m˰±¬®¶f˧‚¤I¼ ¬®¥rír¾ y4¥.Ô,¬îÉj¶m§Q¥¯¬j¬mÓt¬²Iª¯¬®§O¼ ¦¨§‚©Åª£ ¬îɝ¤E§ª¯¬²IªÁ¦¨§ ª£‚¬›ª¯´¬j¬ÖË{´¤E³\Ô£ ¦¨Ér£µÔy¬Ö¼‚´%¶ Ô¦¨§‚Ë{¤m´%³Z¶fª¦±¤E§µÉj¶m§ ¥¦±©E§O¦™Éj¶m§ª°±Ðc¦¨³ÒO´¤ Ãm¬õÒ_¬j´Ë{¤m´%³Z¶m§ ɝ¬õ­_¬jÐm¤E§ ¼ ª£‚¬ ­O¶m¥¯¬®°Ê¦¨§‚¬õ¤mËÉr£‚¤I¤E¥¦¨§‚©Öª£‚¬ ³Z¤E¥¯ªË{´¬—¸ ¬®§‰ª°±¬²o¬®³¬ Ë{´¤E³¶ ¥¸ Ò¬j´r¥¯ÐI§O¥¯¬jªè곤I¼ ¬®°- ð \Þ ¦Ê§Öð7¦¨©E¸‚´¬.Eír¾ / 0 >f8:9A21NKR5 ß 943ÄC6‰S=ŸE8<C“9 ð‚¤m´ñ¤E¸‚´»¥¶m³ҏ°±¬×¥¯¬®§‰ª¯¬®§Oɝ¬ ¦¨§*𦱩E¸‚´%¬˜qIÓ.ª£‚¬ # ð'ì&%-ÞkíîOâ-ð -Þî'\û( )%cìÞkí7ý ³Z¤I¼‚¬®° Ér£‚¤I¤E¥¯¬®¥ %ümþ rù]rþ×Ë{¤m´Õª£ ¬î´¤I¤mªR§‚¤I¼ ¬m¾ ï £‚¬îÃm¬j´­ %üfþo rù]rþ!¦Ê¥€§ ¤mª ¶R¥¯Ðo§‚¤E§ÐI³L¤mË$ª£‚¬@Ãm¬j´­s’jù eÓ3­O¸‚ª ª£‚¬N§‚¤E¸ § rüfþ rù]rþ¥ ¶Ø¥ÐI§‚¤E§Ðo³ ¤mË$ª£‚¬N§‚¤E¸ § ’®ùRe¾Áð‚¤m´Z³Z¶m§ÐÙ¶fÒ ÒO°¨¦ÊÉj¶fª¦±¤E§ ¥jÓ¦±ª ¦¨¥Ÿ—I¸ ¦±ª¯¬Ï´¬®¶u¹ ¥¯¤E§ ¶f­°±¬@ª¯¤Ù¶m¥¥¸ ³Z¬Nª£ ¶fªÔy¬.½I§‚¤dÔ ª£‚¬°±¬²‚¦¨Éj¶m° 
Éj°¨¶m¥¥$¤mË3¥¯¤E³Z¬t¤m´Ö¶m°¨°Œ°¨¶f­_¬®°¨¥¦¨§Õª£‚¬€¦¨§‚ÒO¸‚ª$¥¯ª¯´%¸ Ée¹ ª¸‚´¬m¾2¿Á¬!ª£‚¬j´¬jË{¤m´¬!¦¨§Ãm¬®¥¯ª¦±©E¶fª¯¬®¼#ª£ ¬ÁÒ¬j´%Ë ¤m´¯¹ ³Z¶m§ ɝ¬Â¤mËZ¤E¸‚´Ø­_¬®¥¯ªR³¤I¼ ¬®°c¥¯¤Ë{¶f´ Ó5# ð'ì&%-Þkíî â-ð -Þî'-û+ )%cìÞí<ý ÓZ¦±ËÏÔy¬U´%¬®¥¯ª¯´%¦¨ÉªR¤E¸‚´!Ér£‚¤E¦¨É¬ ª¯¤µª£‚¤E¥¯¬ °±¬²I¬®³Z¬®¥ª£ ¶fª›£ ¶ Ãm¬€ª£ ¬€¥¶m³Z¬cÒO¶f´ªÄ¹<¤mË ¹ ¥¯Ò_¬j¬®É%£Õ¶m¥ ª£‚¬Y¦Ê§‚ÒO¸‚ª³¬®¶m§ ¦Ê§‚©Ò_¤mª¯¬®§‰ª¦¨¶m°<¾,ço¦¨§ ɝ¬ ª£‚¬.¥%¸‚Ò_¬j´%¥¯Ðo§ ¥¯¬jª¥ ¶m¥¥¯¤oÉj¦¨¶fª¯¬®¼ÅÔ$¦±ª£ú¬®¶mÉ%£¦¨§‚ÒO¸ ª °±¬²o¬®³¬ ¶f´¬ §‚¤dÔÅ¥³Z¶m°¨°¨¬j´®Ómª£‚¬Ò_¬j´Ë{¤m´%³Z¶m§ ɝ¬õÉj¶m§o¹ §‚¤mªÏ­¬Ø¦¨§‚Ë{¬j´%¦±¤m´ª£ ¶m§¦¨§»ª£‚¬.©m¬®§‚¬j´r¶m°$Éj¶m¥¯¬Uèê¶fª °±¬®¶m¥¯ªÖ¶m¥Ö³¬®¶m¥%¸‚´¬®¼Ø­ÐÕ­O¶f©N¶mÉjÉj¸‚´%¶mɝРír¾cóħ ¼‚¬j¬®¼-Ó Ë{¤m´$¤E¸‚´4¬²‚¶m³ÒO°±¬€¥¯¬®§‰ª¯¬®§Oɝ¬mÓ-Ôy¬ §‚¤ Ôÿ¤m­ ª¶m¦¨§Õª£‚¬ ¼‚¬®¥¦¨´¬®¼.Ér£‚¤E¦¨É¬mÓ§ ¶m³¬®°¨ÐԒjù r¾œ¾$¤ Ô,¬jÃm¬j´®Óª¯¤@¤E¸‚´ ¥¸‚´%Ò ´%¦¨¥¯¬mӓ¤ Ãm¬j´%¶m°¨°3Ò_¬j´Ë ¤m´r³Z¶m§ ɝ¬Z¤E§Ù¤E¸‚´cª¯¬®¥¯ª ¥¯¬jª ¦¨§ ɝ´%¬®¶m¥¯¬®¼¤E§ °¨ÐµÃm¬j´%Ð@¥%°¨¦±©E£ª°±ÐmÓ ª¯¤@¶«¥¯ª¯´%¦Ê§‚©Z¶mÉjÉj¸o¹ ´%¶mɝÐt¤mËOÛo¾ è{Ë ´%¤E³ÿÛo¾€mí¶m§ ¼cª¯¤Ö¶4­O¶f©Ö¶mÉjÉj¸‚´%¶mɝР¤mËÛo¾ ڂÏè{Ë{´¤E³&Ûo¾ Úfۉír¾y¿Á¬tɝ¤E§<}¯¬®Éª¸‚´¬Yª£O¶fª³¤E¥¯ª ¬j´´¤m´r¥3¶f´¬$Ô¦±ª£ ¦¨§€ª£‚¬4ɝ¤m´´¬®Éª,°±¬²o¦ÊÉj¶m°OÉj°¨¶m¥¥®Ó¶m§ ¼ ª£I¸ ¥c§‚¤mªt¥¶®Ãm¬®¼î­ÐRª¶f½o¦¨§‚©ҏ¶f´ªÄ¹<¤mËʹ:¥Ò¬j¬®Ér£î¦Ê§‰ª¯¤ ¶mÉjɝ¤E¸ §ªj¾ 6 åÏC“9y¡‚B:á3>f8:C“9 ¿Á¬€£ ¶®Ãm¬c¥¬j¬®§ª£ ¶fª ª£ ¬ÖÒ¬j´%Ë ¤m´%³«¶m§ ɝ¬Ö¤mË7°¨¬²o¦¨Éj¶m° Ér£‚¤E¦¨É¬.Éj¶m§U­_¬N¦¨³ÒO´¤ Ãm¬®¼ú¤dÃm¬j´«ª£‚¬N­O¶m¥¬®°¨¦¨§‚¬µ¤mË Ér£‚¤¤E¥%¦¨§‚©ª£‚¬µ³¤E¥¯ªYË{´¬—I¸‚¬®§ªt³¬®³t­_¬j´Y¤mË ¶.¥¸‚¹ Ò_¬j´%¥¯Ðo§ ¥¯¬jªc­ÐÁª¶f½o¦¨§‚©Ø¥¯ÐI§ª¶mɝª¦¨ÉµÉ¤E§‰ª¯¬²oª ¦¨§!ª£‚¬ ¼‚¬jÒ_¬®§ ¼ ¬®§ ɝÐRª¯´¬j¬µ¦Ê§‰ª¯¤Ø¶mÉjɝ¤E¸ §‰ªj¾Áço¦¨§ ɝ¬Ï¥¯ÒO¶f´%¥¯¬¹ §‚¬®¥%¥y¤mˌ¼ ¶fª¶ ¦¨¥y¶ ³Z¶7}¯¤m´õÒ ´¤m­O°¨¬®³NÓÔ,¬›¦¨§‰ª¯¬®§O¼«ª¯¤ ª¯´%¶m¦Ê§Ïª£ ¬4³¤o¼‚¬®°¨¥,¤E§µª£‚¬Ö§‚¬jÔ×Ì,´¤ Ô$§µÉ¤m´%ÒO¸ ¥,¤mË qfÛoÓ ÛmÛmÛoÓ ÛmÛmÛ@Ô,¤m´%¼ ¥j¾ PRD3ÄD 6ED93¡oDO> 798;:=<-:?>@&A4BC@D<-EF@GIH&8;JK@&<LNM.8@">9:I< LPODH&AQ-:SRUTWVDV&VRX7-Y9Z [ J\8;];@DEDE&:I<-E ^_MU<`@ [[ 8;H&@DabQ_];Hc@G=deH&A;] [ @&8;A;:=<-E R f gh4ikj-lnmlpoqgrsmtvuwoxr9yj-o=zblSoq{bzb|s}&~-n}9€ ^ }9‚\ƒ}„D„ 
R 798;:=<-:?>@&ABC@D<-EF@GIH&8;J @D<L†…U‡ˆJ\<ЉU@Dd†‹ H‡ŒR }DDD RvŽ 9Z [ G?H&:?];:I<-E@ [ 8;H&‹@&‹-:?GI:=A];:Ia$Q:?JW8;@&8;abQ-:=a'@DG4deH9LJ'GU‘’H&8 E&J\<-JW8;@D]:IH&<vRc“”<–•ˆ— gW{˜™˜™šoxr9yz›gœNl’˜Ÿž l’¡br l¢˜ —b£ rkmlSoqgrkmtkf gr\œ˜ — ˜'rs{™˜4gr5f gh4ikj-lnmlpoqgrsmtDu oxrFyDj-o=z £ lSoq{z¥¤™fX¦u¡;§©¨,ªF«&«D«™¬| 79@D@&8;‹8­ Yab®DJ\< |¯ J\8d¥@D<F°DR 798;:=<-:?>@&A±BX@&<-E&@DG?HF8J | …U‡ˆJ\<²‰.@&d†‹ H‡ | @&<L³7F]J\>DJ ´ Q:?];];@D®DJ\8\R }DDD RµŽ>@DGIY@D]:IH&<±deJ']8:=a'AŠ‘’HF8ŒEDJW<9Z JW8;@D]:IH&<vR2“”<c•ˆ— g\{™˜™˜™šoxrFyz±gœ$l’˜$¶o — zbl†¡br ln˜ — rsm £ lSoqgrkmt©§†mlpj — mt¥u+mr9yj m\y&˜2¨4˜'rs˜ — mlSoqgr·f grWœ˜ —b£ ˜'rs{™˜¤’¡§UuC¨XªF«D«&«™¬| ¸ :?];¹ [ JŒ‰.@DdeH&< | “”A;8@J\GpR BXH&< <-:?JºO R»KH&88'RXT\V&V¼ R ¸ @DabQ-:=<-JŠ];8@D<A;G=@]:IH&<½L-:I>DJ\8Z E&J\<a'J\A\^CM¾‘’H&8d¥@G)L-J\Aa'8;: [ ]:IH&<$@&<L [ 8;H [ H&A;J\LNA;HZ G=Y-];:?HF<vR f gh.isj-l¢mlSoqgrkmtˆu oxrFyDj-o=z™lSoq{z™|X}  ¼ € ^ ~ V ‚Wƒ „&&~ R ¿ˆQ8;:=A];:I@&<-JµÀ J\G?G=‹@&Yd5RÁT\VDV ‚ R_ g — š'§˜l¢ÃÄUrÁňt?˜™{ £ l — gr oq{ u+˜;Æ&oq{™mtÇmlnm&șm"z'˜ R ¸ “”ÉXÊ*8;J\AA | ¿ˆ@Dd‹8;:IL-E&J | ¸ MºR MU8;@">9:=<L5Ë©R OHFA;Q:pRUT\VDÌ ‚ RMU<P:I<F]8H-LYa']:IH&<5]H½É+8;J'J MUL"͔HD:=<-:=<-E ¯ 8;@&d¥de@&8;A\Rº“”<›MºR ¸ @&<@DA;]JW8”Zn‰.@&dJW8 | JWL-:I]H&8 |&Î$mlx ˜h©mlSoq{zCgœ)u+mrFyDj m'yF˜'| [ @EDJWAÌ ‚Wƒ T&T ~ R ODH&Q <BXJ\<Í@Dde:I< A | MUd¥A];J\8L@Dd5R “”8;J\<-J5Ï+@&<-E&®F:IGILJ½@D< LÁˌJ'>9:=<˺<-:IE&QF]'RÐTWVDVDÌ R ¯ JW<9Z JW8;@D]:IH&<2];Q @]J' [ G?H&:?]AÑaHF8 [ YA”Z¢‹@&AJWL²A;];@D]:=A];:Ia\@G ®9<-H‡CGIJ\LEDJDRғ”<ÔÓFÕ lx¾Î֘˜'lSoxrFy_g”œÁl’˜±ÄKz™z'g\{'oqm £ lSoqgr4œg — f gh4ikj-lnmlpoqgrsmt&uwoxrFyDj-o=zblpoq{zXmrsšŒž×'l’Œ¡br £ l¢˜ — rsmlSoqgrkmtXf grWœ˜ — ˜'rs{™˜µgrØf gh.isj-l¢mlSoqgrsmt(uwoxr £ yDj-o=zblpoq{zÙ¤fX¦u¡;§©¨ £ Äf(uŒÚ Û& b¬| [ @DEDJWA ‚ ¼ ƒ‚ T  | ¸ H&<F];8-Ü J\@DG | ¿ˆ@D< @DL@ R ÉCQ-JÝUÉ)M ¯ Z ¯ 8;H&Y [ RºT\V&VDV R4M`GIJ-:=a'@GI:I¹'J\LNÉ)8;J'J†MUL9Z ͔H&:I<:I<-E ¯ 8;@&d¥de@&8N‘’HF8֎ˆ<-EDGI:IAQvRÞÉ(JWabQ<-:=a'@G¥8;JZ [ H&8;] | “”< 
A];:?]Y-]J›‘’H&8‰4J\A;J\@&8;abQ:I<_¿*HDE&<:?];:?>&JŸ79a:?Z JW<aJ |ß <:?>&J\8A:I]¢°©H‘wÊ J\<< A°9GI>"@&<-:=@R
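The backoff scheme at the heart of these corpus-based lexical-choice models — maximize a conditional probability over the supersynset, falling back to raw corpus frequency on unseen data — can be sketched in a few lines. This is a minimal illustration with hypothetical count tables, not the FERGUS implementation; the function name and the toy counts are invented for the example.

```python
from collections import Counter

def choose_lexeme(supersynset, mother_lexeme, pair_counts, unigram_counts):
    """Pick a lexeme from a supersynset by maximizing p(l_d | l_m), estimated
    from mother-daughter pair counts; back off to raw corpus frequency p(l_d)
    when no pair data exists for any candidate (the Lex -> frequency scheme)."""
    # For a fixed mother lexeme, p(l_d | l_m) is proportional to count(l_m, l_d).
    count, best = max((pair_counts.get((mother_lexeme, l), 0), l)
                      for l in sorted(supersynset))
    if count > 0:
        return best
    # Backoff: most frequent member of the supersynset in the training corpus.
    return max(sorted(supersynset), key=lambda l: unigram_counts.get(l, 0))

pair_counts = Counter({("express", "concern"): 3, ("express", "fear"): 1})
unigram_counts = Counter({"concern": 10, "fear": 4, "dread": 1})
print(choose_lexeme({"concern", "dread", "fear"}, "express",
                    pair_counts, unigram_counts))   # -> concern
print(choose_lexeme({"dread", "fear"}, "see",
                    pair_counts, unigram_counts))   # -> fear (frequency backoff)
```

A full system would also need the random fallback for supersynsets unseen even in the unigram counts, and the further conditioning on supertags and daughters discussed above.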
[The following is recovered from a damaged extraction of the opening sections of Foster (2000), "A Maximum Entropy/Minimum Divergence Translation Model"; a few unrecoverable specifics are marked, and the text breaks off where the extraction ends.]

A Maximum Entropy/Minimum Divergence Translation Model

George Foster
[affiliation and email unrecoverable]

Abstract

I present empirical comparisons between a linear combination of standard statistical language and translation models and an equivalent Maximum Entropy/Minimum Divergence (MEMD) model, using several different methods for automatic feature selection. The MEMD model significantly outperforms the standard model in test corpus perplexity, even though it has far fewer parameters.

1 Introduction

Statistical Machine Translation (SMT) systems use a model of p(t|s), the probability that a text s in the source language will translate into a text t in the target language, to determine the best translation for a given source text. The standard approach to modeling this distribution relies on a "noisy channel" decomposition into a language model p(t) and a translation model p(s|t), which correspond respectively to prior and likelihood components in a Bayesian formulation:

  p(t|s) = p(t)p(s|t) / sum_t' p(t')p(s|t')  ~  p(t)p(s|t),

where proportionality holds when searching for the optimum target text t for a given source text s. This equation has been called the "fundamental equation of SMT" (Brown et al., 1993).

In this paper, I investigate an alternate technique for modeling p(t|s), based on a direct chain-rule expansion of the form:

  p(t|s) = prod_{i=1}^{|t|} p(t_i | t_1 ... t_{i-1}, s),   (1)

where t_i denotes the i-th token in t.(1) The objects to be modeled in this case belong to the family of conditional distributions p(w|h,s), where w is a target word at a particular position in t, and h denotes the tokens which precede it in t. The main motivation for this approach is that it simplifies the "decoding" problem of finding the most likely target text according to the model. In particular, if h is known, the problem of finding the best word at the current position requires only a straightforward search through the target vocabulary, and simple and efficient dynamic-programming based heuristics can be used to extend this to sequences of words. This is very important for applications such as TransType (Foster et al., 1997; Langlais et al., 2000), where the task is to make real-time predictions of the text a human translator will type next, based on the source text under translation and some prefix of the target text that has already been typed.

(1) This ignores the issue of normalization over target texts of different lengths, which can be easily enforced when desired by using a stop token or a prior distribution over lengths.

The main drawback to modeling p(t|s) in terms of p(w|h,s) is that the latter distribution is conditioned on two very disparate sources of information which are difficult to combine in a complementary way. One simple strategy is to use a linear combination of language and translation components, of the form:

  p(w|h,s) = a p(w|h) + (1-a) p(w|s),   (2)

where a in [0,1] is a combining weight. However, this is a weak model because it averages over the relative strengths of its components: when p(w|h) is likely to be a more accurate estimate than p(w|s), it is obvious that the model should rely more heavily on p(w|h), and vice versa, rather than using a fixed weight. In theory this could be partially remedied by making a depend on h and s, but in practice significant improvements with this technique have proven elusive (Langlais and Foster, 2000). The noisy channel model avoids this problem by making predictions based on h the responsibility of the language model p(t), and those based on s the responsibility of the translation model p(s|t), and combining the two in an optimum way. But this comes at the cost of increased decoding complexity, because the chain rule can no longer be applied as in (1) due to the reversed direction of the translation model. Much recent research in SMT (Garcia-Varea et al., 1998; Niessen et al., 1998; Och et al., 1999; Wang and Waibel, 1998) deals with the decoding problem, either directly or indirectly because of constraints imposed on the form of the translation model.

A statistical technique which has recently become popular for NLP is Maximum Entropy/Minimum Divergence (MEMD) modeling (Berger et al., 1996). One of the main strengths of MEMD is that it allows information from different sources to be combined in a principled and effective way, so it is a natural choice for modeling p(w|h,s). In this paper, I describe a MEMD model for p(w|h,s) and compare its performance to that of an equivalent linear model. I also evaluate several different methods for MEMD feature selection, including a new algorithm due to Printz (1998). To my knowledge, this is the first application of MEMD to building a large-scale translation model, and one of the few direct comparisons between a MEMD model and an almost exactly equivalent linear model.(2)

(2) Rosenfeld (1996) reports a greater perplexity reduction over a baseline trigram language model due to MEMD versus a linear combination of word triggers. However, since the models tested differed in other aspects, it is hard to determine how much of this gain can be attributed to the use of MEMD.

2 Models

2.1 Baseline Model

The baseline model is a linear combination as in (2) of a standard interpolated trigram (Jelinek and Mercer, 1980) for p(w|h) and the IBM model 1 (IBM1) (Brown et al., 1993) for p(w|s). As originally formulated, IBM1 models the distribution p(t|s), but since target text tokens are predicted independently, it can also be used for p(w|s). The underlying generative process is as follows: 1) pick a token s_j at random in s, independent of the positions of w and s_j; 2) choose w according to a word-for-word translation probability p(w|s_j). Summing over all choices for s_j gives the complete model:

  p(w|s) = sum_{j=0}^{|s|} p(w|s_j) / (|s|+1),

where s_j is the j-th token in s for j > 0, and s_0 is a special null token prepended to each source sentence to account for target words which have no direct translations. The word-pair parameters p(w|s_j) can be estimated from a bilingual corpus of aligned sentence pairs using the EM algorithm, as described in (Brown et al., 1993).

2.2 MEMD Model

A MEMD model for p(w|h,s) has the general form:

  p(w|h,s) = q(w|h,s) exp(lambda . f(w,h,s)) / Z(h,s),

where q(w|h,s) is a reference distribution, f(w,h,s) maps (w,h,s) into an n-dimensional feature vector, lambda is a corresponding vector of feature weights (the parameters of the model), and Z(h,s) = sum_w q(w|h,s) exp(lambda . f(w,h,s)) is a normalizing factor.

It can be shown (Berger et al., 1996) that the use of this model with maximum-likelihood parameter estimation is justified on information-theoretic grounds when q represents some prior knowledge about the true distribution and when the expected values of f in the training corpus are identical to their true expected values. There is no requirement that the components of f represent disjoint or statistically independent events.(3) This result motivates the use of MEMD models, but it offers only weak guidance on how to select q or f. In practice, q is usually chosen on the basis of efficiency considerations (when the information it captures would be computationally expensive to represent as components of f), and f is established using heuristics such as described in the next section. Once q and f have been chosen, the IIS algorithm (Della Pietra et al., 1997) can be used to find maximum-likelihood parameter values.

(3) Another interpretation, less often made explicit in the ME literature, is that of a single-layer neural net with certain weight constraints and a "softmax" output function (Bishop, 1995).

In the current context, since the aim was to compare equivalent linear and MEMD models, I used an interpolated trigram as the reference distribution and boolean indicator functions over bilingual word pairs as features (i.e., components of f). A pair of source, target words (s,t) has a corresponding feature function:

  f_(s,t)(w,h,s) = 1 if s in s and t = w; 0 otherwise.

Using the notational convention that lambda_(s,w) is 0 whenever the corresponding feature (s,w) does not exist in the model, the final MEMD model can be written compactly as:

  p(w|h,s) = q(w|h) exp( sum_{s' in s} lambda_(s',w) ) / Z(h,s).

This model is structurally quite similar to the one defined in the previous section:

  p(w|h,s) = a q(w|h) + (1-a)/(|s|+1) sum_{j=0}^{|s|} p(w|s_j),

with the MEMD feature weights playing the role of the IBM1 probabilities p(w|s_j), and the MEMD model summing over contributions from source sentence words rather than tokens for efficiency. If there are M free parameters in the trigram and N word pairs, the MEMD model will contain M+N free parameters and the linear model a nearly identical number, so if the source and target vocabulary sizes are equal the two models will contain precisely the same number of free parameters.

One important practical difference between the two models is the requirement to calculate the MEMD normalizing factor Z(h,s) for each context in which this model is used. This makes the MEMD mod[...]
x{„J_‚§k†@wyx/‚^†s v_O‰ }yƒa}y€†s|_ƒ@„‡„›x^™cvx{|5zy€‡–@x±}y§5ƒ@| }y§]x"„M€‡|]x{ƒaw& †  x{„¤£9z† }y§5ƒa}&€}Ž€‡zŽ|]†@}ŽŒfx{ƒ@zy€ˆ_„x#}†Æ§5ƒ{–@x)€}Ž€‡|5‚^†@wv,†@wSƒa}xƒ@„‡„ ƒq–Qƒ@€‡„Mƒaˆ_„xkІ@w  ‰ÚvWƒ@€wZŒfx{ƒa}y]wyx{z3Ÿ €xpƒ@„‡„ˆ_€‡„‡€M|]s5ƒ@„ v_ƒ@€‡wz†@Œ…І@w  z¯Š"§_€‡‚§º‚^†c†c‚G‚G]w)€M|‹z†s xƃ@„‡€s|]x  zx{|~}x{|5‚^xZv_ƒ@€w/€‡|3}y§5x }wƒ@€‡|5€‡|5º‚^†@wv_5zS¡S«ª—(†@wyx^‰ †?–@xGw{£…zy€M|5‚^x%}y§]x%x{ v_€w€M‚Gƒ@„=x^™Ovx{‚^}yƒa}y€†s|5zƆ@Œ#Œ¨x{ƒQ‰ }y]wx{z&ƒawyx)zy]v5v†szx  }†ÆwyxŠºWx{‚^} }y§]x{€wŽ}w]x±–aƒ@„‡]x{z{£ §5ƒq–c€M|]ƒŒfx{ƒa}y]wyxŒf†@w“»(¼»(½¾‚^†~†O‚G‚G]wyw€M|]Cv_ƒ@€‡w¯€M| }y§]xƂ^†@wyvW5zŠ#†s5„  ˆx}y§]xG†@wyxG}y€‡‚Gƒ@„M„›ª€‡|5ƒ  –c€‡zƒaˆ_„x xG–@x{|%€Œ]€}…Š#xGwyx¯‚^†s v_5}yƒa}y€†s|5ƒ@„‡„›Œfx{ƒ@zy€ˆ_„‡x@« ¸ †s x  xG}y§5†  †@Œzx{„x{‚^}y€M|]3ƒÒzy]ˆ_zyxG}†@Œwyx{„‡€Mƒaˆ_„xÀŒ¨x{ƒQ‰ }y]wx{z €‡z¢}y§]xGwyxGŒf†@wyx)wyx{•c5€wx  £@ƒ@z  x{zy‚^w€‡ˆ,x  €‡|}y§]x |]x^™O}"zx{‚^}y€‡†s|« ¿ :&25r,>s¶ 6s2ÁÀŽ2NMÚ2_tc>s·°4 ´ u¢x^™cvxGw€M x{|9}x  Š"€}y§%}y§]wyxGx xG}y§]†  z&Œf†@w0zx{„x{‚^}‘‰ €‡|5˜ˆW€‡„‡€‡|]s_ƒ@„0І@w  v_ƒ@€wSz%Œ¨†@w€‡|_‚G„‡5zy€†s|€‡|»}y§]x  †  x{„‡z{« E „‡„W xG}y§]†  z#ƒ@zzy€s|ªz‚^†@wyx{z}†%€M|  €–O€  ‰ 5ƒ@„9v_ƒ@€wz{£Qz†Œ¨x{ƒa}y5wyx#zy5ˆ_zxG}yz7†@Œ5ƒ@|~›  x{zy€‡wyx  zy€IGx ‚Gƒ@|ˆx0x^™O}wƒ@‚^}x  ˆc›/}yƒaÇc€M|]}y§5x§5€‡s§]x{z}‘‰Úwƒ@|]Ç@x  v_ƒ@€‡wzG« Â.PSR \Äà ½ ÃXYNb%ŔV w ^N[ƘY ½ U‹^V ¬)§5xzy€M v_„x{z}zy‚^†@w€‡|5À xG}y§5†  Нƒ@zÆJ]}y_ƒ@„…€‡|]‰ Œf†@wƒa}y€†s|\Ÿ —kuy¡S£  x^­W|5x  Œ¨†@wƒ v_ƒ@€wCŸ zG£ }S¡±ƒ@zGÉ ÇWŸe9îá¡0Ê Ì È ¦?É   ÊË  „Ì Ì Í ¦?Ʉ¡SÊ Ë ¡SÌ*Î ¼ŽŸ‹Ï=ÐÐ5¡c„†@ Î ¼¢Ÿ‹Ï=ÐÐ5¡ Î ¼¢Ÿ‹Ï¡ Î ¼¢Ÿ‹Ð5¡ Ð ÑÒ û?þ€ ýÚþ‘þÓ‘ü%Qöúû?öøû?ù7þêöøùSõGÿüTû?þ‹ûQüýúö ÿÚöúüTû ‘üTû?÷ ÿ¤ýSöúûGÿa@þlý…÷¤ü?ý‘þÔ,üSý$©Sû$ Õ Ö×ÕØÙ†Ô ýÚþ‘þ*ý±þ„ ÿÚþêýÚ÷Q ýÚüÛÚc…Ü%Õ ÝÞ‰ Š"§]xGwx Î ¼¢Ÿe9Ðá¡®€‡z(}y§]xæv5w†@ˆ_ƒaˆ_€‡„‡€‡}°›Ø}y§5ƒa}˜ƒÄwƒ@|O‰  †s„‡›®‚§]†szx{|pvWƒ@€w†@Œ0‚^†~†O‚G‚G]wywS€‡|]ºz†s]w‚^x ƒ@|  }yƒawy@xG}/}†@Ç@x{|_z/€‡|(}y§]x‚^†@wyv_5z€‡z Ÿe9Ðá¡Sî Î ¼…ŸesÐjß á¡€‡z 
}y§]xv5wy†@ˆ_ƒaˆW€‡„‡€}l›J}y§5ƒa}¯}y§]xz†s]wS‚^x}†@Ç@x{|‹€‡z%eƃ@|  }y§]xº}yƒawy@xG} }†@Ç@x{|΀Mz|5†@}áTîxG}y‚a@|  Î ¼…Ÿ‹Ï¡ƒ@|  Î ¼¢Ÿ‹Ð5¡ƒawyx#}y§]x#„‡xGŒ¨}¢ƒ@|  w€‡s§9}¢ƒawys€‡|5ƒ@„Mz†@Œ Î ¼¢Ÿ‹Ï=ÐÐ5¡S« —˜]}y5ƒ@„€‡|]Œf†@wƒa}y€†s|Ø x{ƒ@zy5wyx{zº}y§]x  xG@wyxGxp}† Š"§5€M‚§­eƒ@|  ᯃawxJ|]†s|O‰°€M|  xGvx{|  x{|~}G£5z†ª€}€‡z"ƒ wyx{ƒ@z†s|_ƒaˆ_„x/‚§5†s€‡‚^x/Œf†@wzy‚^†@w€M|]%v_ƒ@€wzG« ÂQPO \ào·\_pâá]YU‹V ¿ ¬)§]xÀzx{‚^†s|  zy‚^†@w€‡|5( xG}y§]†  Šƒ@zƒ@|»ƒav5v5wy†q™O€ž‰ ƒa}y€†s|ª†@Œ}y§]x"—˜š0—˜ sƒ@€‡|ªŒ¨†@w#Œ¨x{ƒa}y5wyx Ÿ? ¡ £  x^‰ ­W|]x  ƒ@z}y§]x%„†@a‰°„‡€Ç@x{„‡€M§]†~† ® €¥,xGwyx{|_‚^xCˆxG}°ŠxGx{| ƒ3—˜š0—˜ †  x{„¯Š§5€‡‚§»€‡|_‚G„‡  x{zJ}y§5€‡z%Œfx{ƒa}y]wyx ƒ@|  †s|5xƊ"§5€‡‚S§  †cx{z)|]†@}GÉ ã“ ¡ Ê Ô ¾ 亾 „‡†@ ¼  ¡ ŸS䋾 寡 ¼¢ŸS䋾 寡 Š"§]xGwxÀ}y§5x‹}wƒ@€‡|5€‡|5˜‚^†@wyvW5z®Ÿ‹å"Ð ä ¡‚^†s|5zy€‡zy}yz%†@Œ ƒ»zxG}À†@ŒÀŸ z}yƒa}y€‡zy}y€‡‚Gƒ@„‡„›Î€‡|  xGvx{|  x{|~}S¡ zx{|~}x{|5‚^x v_ƒ@€wSzҟ ¿@нQ¡S£Æƒ@|  ¼  ¡ €‡zº}y§]xÒ †  x{„Š"§5€M‚§Á€‡|O‰ ‚G„‡  x{z Ÿ? ¡ « ¸ €‡|_‚^x%—˜š0—˜  †  x{„‡zƒawyxª}wƒ@€‡|]x  ˆc›Ä­W|  €‡|]\}y§]xÒzxG}‹†@ŒŒ¨x{ƒa}y]wx3Š#x{€s§~}yz‹Š"§_€‡‚§ ƒQ™]€‡€IGx{z}y§]x „‡€‡Ç@x{„‡€‡§]†c†  †@Œ#}y§5x}wSƒ@€‡|5€‡|]À‚^†@w‰ v_5z{£€}Z€‡z|5ƒa}y]wSƒ@„)}†æwSƒa}xkŒfx{ƒa}y]wyx{zªƒ@‚G‚^†@w  €‡|] }†Ä§]†?Š J_‚§Á}y§]xG›Ñ‚^†s|~}w€ˆ_]}x3}†;}y§_€‡z‹„‡€Ç@x{„‡€‰ §]†c†  « E v,†?Š#xGwŒf5„Žz}wSƒa}xG@›˜Œf†@wC5z€‡|]Àsƒ@€‡|5zƀ‡z }†/ˆ__€‡„  ƒ/ †  x{„]€}xGwƒa}y€–@x{„‡›ˆc›%ƒ 5 €‡|]Æƒa} x{ƒ@‚S§ z}xGvp}y§]xŒ¨x{ƒa}y5wyxJЧ5€‡‚§ks€–@x{z}y§]x§5€s§]x{z}sƒ@€‡| Š"€}y§pwx{zvx{‚^}Æ}†®}y§]†szxƒ@„wx{ƒ  ›˜ƒ _ x  « ÈxGwy@xGw xG}0ƒ@„=Ÿ‘Ô{Õ@Õ?Hs¡  x{zy‚^wS€ˆxƒ@| x^ëÀ‚G€x{|9}#ƒ@„@†@w€}y§5éŒf†@w ƒ@‚G‚^†s v_„M€‡zy§5€‡|5Ò}y§5€‡zZ€‡|Ċ§5€‡‚§ ƒav5v5wy†q™O€Mƒa}y€†s|5z }†®¼  ¡ ŸS䋾 寡%ƒawyx˜‚^†s v_]}x  €‡|;v_ƒawSƒ@„‡„x{„¯Œf†@wZƒ@„‡„ Ÿ |]xGŠ¡ Œ¨x{ƒa}y5wyx{z Ÿ? 
¡ ˆc›Î§]†s„  €‡|5Òƒ@„‡„±Šx{€s§9}yzÀ€‡| }y§]xx^™O€Mz}y€‡|]ª†  x{„=­5™Ox  ƒ@|  †@v5}y€‡ª€I{€‡|]†s|_„› †q–@xGw u  ¡ «a;"†?Š#xG–@xGw{£@}y§_€‡z7 xG}y§]†  wyx{•c5€wx{z7ƒ@|~› x^™Ov,x{|_zy€–@xv_ƒ@zyzx{z†q–@xGwÆ}y§5x%‚^†@wv_5z}†º†@v5}y€‡€IGx }y§]x/Šx{€s§~}yz)Œf†@w"}y§]xJzxG})†@ŒŽŒ¨x{ƒa}y5wyx{z"5|  xGw)‚^†s|O‰ zy€  xGwƒa}y€†s|˜ƒa}x{ƒ@‚S§3z}xGv7£ƒ@|  €}ƒ _ z†s|5„›‹†s|]x Œfx{ƒa}y]wyxv,xGwz}xGv7£Oz†%€}#€‡z#|]†@}0v_wƒ@‚^}y€‡‚Gƒ@„WŒ¨†@w¯‚^†s|O‰ z}w_‚^}y€‡|]C †  x{„‡z0‚^†s|9}yƒ@€‡|_€‡|]/}y§]†s_zyƒ@|  z0†@Œ,Œfx{ƒQ‰ }y]wyx{z)†@w" †@wyx@« u‘|ºƒCwyx{‚^x{|~}±v_ƒavxGwƟSG w€‡|~}I@£Ô{Õ@Õ?Bs¡S£NG wS€‡|9}Iƒaw‰ s]x{zº}y§5ƒa}‹€}º€‡zÀ5zy5ƒ@„M„›;zyOëZ‚G€x{|~}À}†;v,xGwŒ¨†@w }y§]xª€‡}xGwƒa}y€†s|  x{z‚^w€ˆx  €‡|p}y§]xv_wyxG–c€‡†s5zv_ƒawƒQ‰ @wƒav_§¹†s|5„›é†s|5‚^x@£‹€‡|Æ@}y§5xGw\І@w  z\}y§5ƒa}æŒfx{ƒQ‰ }y]wyx{z‚Gƒ@|Zˆ,x"wƒ@|]Ç@x  zy€‡ v_„‡›Jƒ@‚G‚^†@w  €M|]C}†}y§]x{€w sƒ@€‡|\Š"€}y§3wx{zvx{‚^}C}†pz†s xº€‡|5€}y€‡ƒ@„&†  x{„Ú«Ó;"x ƒ@„‡zy†3s€–@x{z ƒ@|΃@„@†@w€‡}y§5 Œ¨†@w‚^†s vW]}y€‡|]psƒ@€‡|5z 5z€‡|] ƒØ|c5 xGw€‡‚Gƒ@„Jƒav5v5wy†q™O€Mƒa}y€†s| Š"§5€‡‚S§ wyx^‰ •c5€wyx{z¢†s|5„‡›Jƒzy€‡|]s„‡x#v_ƒ@zzކq–@xGw0}y§5x±}wƒ@€‡|5€‡|5‚^†@w‰ v__zG«Žu ƒ  †@v5}x  G w€M|9}I?æc xG}y§]†  Œf†@w¯‚^†s v_]}y€‡|5 —˜š0—˜¹sƒ@€M|5zG£5zy€‡|5ª}y§]xwyxGŒfxGwyx{|5‚^x%}w€@wƒ@¦ƒ@z }y§]xJ€‡|5€}y€‡ƒ@„, †  x{„¤« Â.PÂ Å(ç\£RÛá]YU‹V ¿ ¬)§5x)­W|5ƒ@„Wzy‚^†@w€‡|5J xG}y§]†  €‡|9–@†s„‡–@x  }y§]xsƒ@€‡|Z†@Œ x{ƒ@‚S§JІ@w  ‰ÚvWƒ@€w=v_ƒawSƒ@ xG}xGw,¼¢ŸfáG¾ ea¡Š€}y§5€‡|u‘ȯ—\Ôa« u‘|5z}x{ƒ  †@Œ,}yƒaÇO€‡|]Csƒ@€‡|5z Š€}y§ wyx{zvx{‚^} }†ƒ@|ª€‡|_€ž‰ }y€‡ƒ@„0 †  x{„&ƒ@z/€‡|p}y§]x v5wxG–c€†s_zÆzx{‚^}y€†s|£7u‚^†s%‰ v_5}x  }y§]x{’Š"€‡}y§Áwyx{zvx{‚^}‹}†Äƒ×‘Œ 5„‡„Mņ  x{„ Š"§_€‡‚§‹€‡|_‚^†@wyv†@wƒa}x  ƒ@„‡„ƒq–Qƒ@€‡„Mƒaˆ_„xƊ†@w  v_ƒ@€wSzGÉ ãè ¡ Ê Ô ¾ 䋾 „†@ ¼¢ŸS䋾 寡 ¼ Ë  ¡ ŸS䋾 寡 Ð Š"§5xGwyx&¼ Ë  ¡  x{|]†@}x{z0}y§]x"Œ 5„‡„Ou‘ȯ—\Ô †  x{„a¼ZŠ"€‡}y§ }y§]x0v_ƒawƒ@ xG}xGw_¼¢ŸfáG¾ ea¡,zxG}7}†IGxGwy†ƒ@|  }y§5x wyx{zy5„‡}‘‰ €‡|5  €Mz}w€ˆ_5}y€†s|±¼¢Ÿfç ¾ 
eQ¡=wyx{|5†@wƒ@„‡€IGx  «¢¬)§5xƒ  ‰ –aƒ@|9}yƒa@xƆ@Œ=}y§5€‡z0xG}y§]†  €‡z0}y§5ƒa}±€}0s€‡–@x{zƒ x{ƒQ‰ zy5wyx †@ŒOx{ƒ@‚§v_ƒawƒ@xG}xGw”æúz=І@wy}y§J€‡|/}y§]x0v5wyx{zx{|5‚^x †@Œ±†@}y§]xGwJv_ƒawƒ@ xG}xGwSzG« E zJ€‡zÆ}y§]xZv5wyxG–O€†s5zÆzx{‚T‰ }y€†s|7£_}y§5€‡z€‡z"ƒ@|(ƒav5v5wy†q™O€Mƒa}y€†s|®ˆx{‚Gƒ@5zyx  xG}xGw‰ €M|5€‡|]Æ}y§]x}w]xsƒ@€‡|ZŠ#†s5„  wyx{•c5€wyx"wyxG}wƒ@€‡|5€M|] ¼ Ë  ¡ ƒ@|  |]†@} xGwyx{„›Àwyx{|]†@wªƒ@„‡€I{€‡|]]« E v5wy†@ˆ_„x{ Š"€}y§Cuêȯ—Ô0sƒ@€‡|_z…€‡z…}y§5ƒa}¢}y§]xG›Cƒawyx |]†@}¯–@xGwy›Zwy†@ˆ_5zy}G«¢uêŒ7}y§]x‚^†@wyv__z‚^†s|~}yƒ@€‡|5z¯ƒ%zx{|O‰ }x{|5‚^x±vWƒ@€w¯Ÿ ¿sнQ¡…Š"§5€‡‚S§J‚^†s|_zy€‡z}yzކs|5„‡›/†@Œ_ƒÆzy€‡|5s„x І@w  vWƒ@€w#Ÿe9Ðá¡S£a}y§]x{| 㓠¡ Š"€‡„‡„@‚^†s|~}yƒ@€‡|J}y§]x }xGw à Û é Û „‡†@êë ¡ Û  ìí ê?ë ¡ Û   Þ ì êë ¡ Û   Þ ì £9z†/€‡Œs¼¢Ÿfá{¾ e h ¡Ž€Mz&‚G„†szx±}†·IGxGwy† Ÿ ƒ@z €‡zŒfwyx{•c]x{|9}y„‡›3}y§]x‹‚Gƒ@zx?¡S£ ãè ¡ Š"€‡„‡„0ˆ,xº‚G„†szx }†À€‡|O­W|5€‡}°›@£]xG–@x{|(}y§]†s]s§\Ÿe9Ðáy¡¯ƒ{›‹†O‚G‚G]w†s|5„› †s|5‚^x€‡|%}y§]x)}wƒ@€M|5€‡|]‚^†@wyv_5z{«¢¬…†/wyx{ x  ›J}y§5€MzG£@u ‚^†s vW]}x  sƒ@€‡|5zŠ"€}y§Àwyx{zyv,x{‚^}}†ƒ%„‡€‡|]x{ƒaw¯‚^†s%‰ ˆ_€M|5ƒa}y€†s|˜†@Œ#u‘ȯ—\Ôªƒ@|  ƒ‹zy †c†@}y§5€‡|]º †  x{„-î=£ †@Œ}y§]xŒf†@ws0@¼¢Ÿfç ¾ ¿?¡2»Ÿ‘ÔX3ï0,¡„fç ¾ è Ðy¿Q¡S«#u‘|}y§]x x^™OvxGw€‡ x{|~}yz&wyxGv†@wy}x  ˆ,x{„‡†qŠ/£su&5zyx  ƒ/5|_€Œ¨†@wS  €Mz}w€ˆ_5}y€†s|ªŒf†@w5Š"€}y§­0ÀÊ â Õ@Õc«ð ¸  †~†@}y§5x  uêȯ—ÔÆsƒ@€‡|5z±‚Gƒ@|kˆx/‚^†s v_]}x  €M| v_ƒawSƒ@„‡„x{„,€‡|ºƒz€‡|]s„xv_ƒ@zyz†q–@xGw"}y§]x}wƒ@€‡|5€‡|5‚^†@w‰ v__z"5zy€M|]}y§]xƒ@„@†@w€}y§5 €‡|‹­Ws]wyxªÔa«"¬"§]xJ„‡€M|]x ƒawÇ@x  Š"€}y§Øƒ@|Áƒ@z}xGw€‡zÇÄ}yƒaÇ@x{z®€‡|~}†»ƒ@‚G‚^†s_|9} }y§]x΀M|5‚^wyx{ƒ@zx;€‡| ¼¢Ÿfá{¾ eQ¡  5x\}†Áwyx{|5†@wƒ@„‡€I{€M|] }y§]x  €Mz}w€ˆ_5}y€†s|Ƽ¢Ÿfç ¾ eQ¡#ƒaŒf}xGw"zxG}}y€‡|5¼ŽŸfá„ñÚ¾ eQ¡0}† ò¨=ûQüSÿÚõ?þêý=öúû{ÿÚþlýÚþ‘÷ ÿÚöúûQùlõ?üTö‘þÔžüýaó·,ü, $l@þŽÿÚõ?þ0öúûj ÿÚþêý@ü, yÿÚþ$#ÿ¤ýÚöúùSýl,7õQö°õ–,ü, $%+^þ,ÿÚõ?þi±þêÿÚõQü”$)$?þ„ ÷êýÚö@þ$õ?þêýÚþ-±üýÚþ¢÷¤ö±öôyý7ÿÚü#ÿÚõQþ-•l—õ•÷ö3ù,Söúûýû+{öúûQù $?þ‘÷lýÚö @þ $öúûÿÚõQþ-qýÚþ {öúü,Q÷7÷¤þ 
lÿÚöøüSûZ/ zyxGs x{|9} ­W„xv_ƒ@€‡wz zx{|~}x{|5‚^x/v_ƒ@€wSz š0|]s„‡€Mzy§À}†@Ç@x{|5z ì]wyx{|_‚§®}†@Ç@x{|_z }wSƒ@€‡| Õ@ð@ð Ôa£ H@Ö@Õc£øð?žañ ð@Õc£ žø~í~£øÕ@Ö?H ÖOÔa£ B@ð?Hc£‡Ô@Ô{ð §5x{„  ‰Ú†s]} Öañ žø]£iíž?B ÕsíBc£øÖ@Õø Ôa£úñùB@ðc£øÖ?žañ }x{zy} Öañ ž@Öc£ HsíH Õ?Bø]£ BañsÕ Ôa£‡ÔGñsÖc£øÖ@ðañ ¬¢ƒaˆ_„xÔaÉ%ú†@wv_5z±zxGs x{|~}yƒa}y€†s|«¬)§]x5û»(üþýÿ7zxGsx{|9}Šƒ@z)_zx  }†}wƒ@€M|‹‚^†sCˆ_€M|5€‡|]%Šx{€s§9}yz)Œ¨†@w }y§]x/}wS€@wƒ@“ƒ@|  }y§5x/†q–@xGwƒ@„M„7„‡€‡|]x{ƒaw)†  x{„Úî]}y§]xS½ ºzxGs x{|~})Нƒ@z"5zx  Œ¨†@w"ƒ@„M„†@}y§]xGw"}wƒ@€M|5€‡|]]«   Ÿe9Ðáy¡! ãè ¡#" ñ $%'&( $)*$ )+% $, Ÿ ¿@нQ¡%6\Ÿ‹åÐ ä%¡! $%'&* -$) á ') ½#! . " | Û fSÛ gTßih ¼¢Ÿfá{¾ e g ¡2£ë àã/ ì ë Û fSÛ í à ì / fáG¾ è0Ðy¿?¡ $%0&1 e 0) ¿2! ã  ¡ " ã  ¡ 2»„†@ 3 3 ã465 ë  „ì êë ¡ Û  ì  á ñ87 ÊØá! ã  ¡:9 "qã  ¡:92»„‡†@ 3 3 í 465 ë  „ì;=< ×?> @BA ;=< × 9 > @BA óBC ;=< × 9 > @BA D  Ÿe9Ðáy¡! ãè ¡#"qãè ¡ Ë]¾ 䮾 쎀s]wyx»ÔaÉ E „@†@w€}y§_ Œf†@wºuêȯ—Ô(sƒ@€M|5zG«FE f ŸeQ¡ s€–@x{z"}y§]x/|~_CˆxGw±†@ŒŽ}y€‡ x{zeC†O‚G‚G]wz)€‡|‹¿s« IGxGwy†]£…Œ¨†@w/x{ƒ@‚S§ÒІ@w  áñ 7 ʹá€M|p}y§]x –@†c‚GƒaˆW5„‡ƒawy›@« ¬…†kzvxGx  ]v3}y§]xƒ@„@†@wS€}y§5‹£7uvxGwyŒf†@w x  }y§5€‡z z}xGvp†s|5„›kŒ¨†@w}y§]†szx%áñ=zy5‚S§˜}y§5ƒa}¼¢Ÿfá„ñÚ¾ ea¡HG â ñ]Ôa« ¬)§5€Mz=‚Gƒ@5zx{zŽ}y§5x#sƒ@€‡|_z=Œf†@wŽv_ƒ@€wz¯ŸesÐáñ‡¡=z5‚§J}y§5ƒa} ¼¢Ÿfá ñ ¾ ea¡JI â ñ]Ô}†%ˆxzy„M€s§9}y„‡›†?–@xGwyx{z}y€‡ªƒa}x  £5ˆ_]} zy€‡|_‚^xÀ}y§5x‹sƒ@€‡|5z%†@Œzy5‚§»vWƒ@€wz ƒawyx®„†?Š €‡|;ƒ@|~› ‚Gƒ@zx@£c}y§]x)wƒ@|5Çc€‡|5†@Œ}y§]x †sz} –aƒ@„‡5ƒaˆ_„‡xvWƒ@€wz&€‡z 5|5„M€Ç@x{„›}†ˆxÆwƒ  €‡‚Gƒ@„‡„›ªƒQ¥x{‚^}x  « K LNMPO 256~·RQØ2 ´ >s< uwƒ@|3x^™OvxGw€‡ x{|~}yzƆs|Ò}y§5x€ú¯ƒ@|5ƒ  €‡ƒ@|’;ƒ@|5zyƒaw  ‚^†@wyv__zG£&Š"€}y§š0|]s„M€‡zy§æƒ@z%}y§5xÀz†s5w‚^x‹„‡ƒ@|]s5ƒa@x ƒ@|  ì5wyx{|5‚§Àƒ@z }y§5x"}yƒawy@xG}„‡ƒ@|]s5ƒa@x@« E Œ¨}xGwzx{|O‰ }x{|5‚^xºƒ@„‡€s|5x{|9}5zy€‡|]k}y§]xZ xG}y§5†  x{zy‚^w€‡ˆ,x  €‡|Ÿ ¸ €‡ƒaw  xG}ƒ@„¤«£¢Ô{Õ@Õ@ðs¡S£}y§]xJ‚^†@wv_5zŠƒ@zzv_„‡€‡} €‡|~}†  €‡zfå†s€‡|~}‹zxGs x{|~}yzkƒ@zkz§]†qŠ"|Á€M|Ñ}yƒaˆ_„xÎÔa« 
¬…†æxG–aƒ@„‡5ƒa}x˜v,xGwŒ¨†@wªƒ@|5‚^x@£¯u5zx  vxGwyvW„x^™O€‡}°›,É ¼¢ŸS䋾 寡 ãàTS Û é Û £@Š"§5xGwyx7¼ €‡z…}y§]x†  x{„9ˆx{€‡|]"xG–aƒ@„ž‰ 5ƒa}x  £/ƒ@|  Ÿ‹å"Ð ä ¡‹€Mz‹}y§]xÒ}x{z}˜‚^†@wyv_5z{« G¢xGw‰ v_„x^™]€}l›€‡z#ƒ@†c†  €‡|  €‡‚Gƒa}†@w†@Œ7vxGwyŒ¨†@wSƒ@|5‚^x"Œf†@w }y§]xÒ¬…wƒ@|5zy¬#›cv,xpƒav5v_„M€‡‚Gƒa}y€†s|  x{zy‚^w€‡ˆ,x  €‡|Ø}y§]x €‡|~}wy†  5‚^}y€†s|£cƒ@|  €}#§5ƒ@zƒ@„‡z†ˆxGx{|Z_zx  €M|}y§]x xG–aƒ@„‡5ƒa}y€†s|Ć@ŒÆŒf5„M„ž‰º_x  @x  ¸ —(¬ z›czy}x{z˜Ÿ E „ž‰ ò|5ƒ@€nI{ƒ@|pxG}/ƒ@„¤«£0Ô{Õ@Õ@Õs¡S«¬…†‹x{|5zy]wyx ƒºŒ ƒ@€wƂ^†s%‰ v_ƒaw€Mz†s|£7ƒ@„‡„& †  x{„‡zƐ_zx  }y§]xzyƒ@ x }yƒaw@xG}C–@†a‰ ‚Gƒaˆ_5„Mƒawy›@« 20 25 30 35 40 45 0 5000 10000 15000 20000 25000 30000 test corpus perplexity U number of features MI MEMD IBM1 쎀s]wx¯ðcɎ—(š#—( vxGwyŒf†@wƒ@|5‚^x–@xGwzy5z&|c5JˆxGw †@Œ7Œfx{ƒa}y]wyx{z¯Œf†@w–Qƒaw€‡†s5z0Œfx{ƒa}y]wyx^‰°zyx{„x{‚^}y€†s|‹ xG}y§O‰ †  zG« ¬…†‚^†s v_ƒawyx—(š#—(ÄŒ¨x{ƒa}y]wx^‰°zx{„x{‚^}y€†s|% xG}y§O‰ †  zG£Zup­_wz}\wƒ@|]Ç@x  ƒ@„‡„ZÖ?ž €‡„‡„M€†s|éˆ_€‡„‡€M|]s5ƒ@„ І@w  v_ƒ@€‡wz3‚^†~†O‚G‚G]wywS€‡|]ÑŠ"€}y§5€M| ƒ@„M€s|]x  zx{|O‰ }x{|5‚^x(v_ƒ@€wz €M|»}y§]x®}wƒ@€‡|_€‡|]3‚^†@wyv_5zª5zy€‡|]p}y§]x —(uªƒ@|  u‘È—Ԙsƒ@€‡|5zº xG}y§]†  zG«éÈx{‚Gƒ@5zx3}y§]x —˜š0—˜ sƒ@€‡|5z± xG}y§]†  Šƒ@z±5‚S§‹ †@wyxÆx^™Ov,x{|]‰ zy€‡–@x@£a€‡}7Šƒ@z¢5zx  }†"wƒ@|]džs|5„‡›ƒzy§]†@wy}…„‡€‡z}†@Œ5ƒav]‰ v5w†{™]€‡ƒa}x{„›ºÔ”HañO£úñ@ñ@ñCvWƒ@€wz  xGw€‡–@x  ˆ~›% xGwys€‡|5 }y§]xZ}†@vØÔGñ@ñO£úñ@ñ@ñ3‚Gƒ@|  €  ƒa}x{zJŒ¨w†s x{ƒ@‚§†@Œ±}y§]x †@}y§]xGw xG}y§]†  zG« E zŽzy§]†qŠ|€M|/}yƒaˆ_„x)ðc£@}y§]x}y§]wyxGx  xG}y§5†  z s€–@x"zy]ˆWz}yƒ@|9}y€Mƒ@„‡„›  €¥,xGwyx{|~}&wƒ@|]ÇO€‡|]sz{£ xG–@x{| ƒ@ †s|]Ò}y§]x‹}†@v5‰Úwƒ@|]Ç@x  vWƒ@€wzG«;ì]†@wZx{ƒ@‚§  xG}y§5†  £Æuº}wƒ@€M|]x  —˜š0—˜  †  x{„‡z‹†s| ƒÄzx^‰ •c]x{|5‚^x†@Œ=zy5‚G‚^x{zzy€–@x{„›ª„Mƒawy@xGw#Œfx{ƒa}y]wyxzxG}yz±‚^†s|O‰ zy€Mz}y€‡|]º†@Œ}y§]x }†@v]‰Úwƒ@|]Ç@x  Š#†@w  v_ƒ@€wzŒf†@w/}y§5ƒa}  xG}y§5†  « ¬)§]x‹wyx{zy_„}yz ƒawyxkzy§5†qŠ"|΀M|»­_s]wyxkðc« ]x}†J}y€M 
x"‚^†s|5z}wSƒ@€‡|9}yz{£WVðañO£úñ@ñ@ñQ‰0ƒ@|  ÖañO£úñ@ñ@ñQ‰ Œfx{ƒa}y]wyx †  x{„‡zŠ#xGwx0}wƒ@€‡|5x  †s|5„›Œ¨†@w…}y§]x u‘ȯ—\Ô Œfx{ƒa}y]wyx;zxG}yzG£JŠ"§5€‡‚S§ †s5}v,xGwŒ¨†@wx  }y§]x†@}y§]xGw X ¨ÄŽ‘j ‘,‘‘€žþyÿ?ýÚþ´•l—õ•÷ö ±ü”$qþ 0ÿ+^þ‘÷è#?ýÚüqö" ÿÚþ "!™ˆ'$#!q÷ÿÚüÿ¤ýöúûüTû·ZY,²‘,•÷ša )¬Oþ‘ûGÿÚöl/ —(u —˜š0—˜Ãsƒ@€‡|_z u‘ȯ—\Ô/sƒ@€‡|5z É É w{«”‹« ƒ@|  xG}  wq«-®« € å‘x @†?–@xGw|5 x{|~} @†s]–@xGwS|]x{ x{|~} Š#x |5†s5z Šx |]†s5z Š#x |]†s_z € åx [ [ £Ó£ [ [ @†q–@xGw|_ x{|9} @†s]–@xGw|]x{x{|9} ÉÓÉ †a¥x{|  xGwz „‡†s€ @wƒ@|~} ƒ@‚G‚^†@w  xGw s5‚G‚G€ s5‚G‚G€  x{„xG}x  w]\ xGs„‡x{ x{|9} ‚G„†szyxGw v_„‡5z  xGv5wyx{‚G€‡ƒa}y€†s| ƒ@ †@w}y€‡zyzx{ x{|~} €M|9}xGw„MƒaÇ@x Œfx{„‡€ž™ €‡ vxGw€‡ƒ@„ €‡ vxGw€‡ƒ@„ x{|  †@wzx ƒav5vW]›@xGw Іc†  ˆW€‡|]x ˆx{ƒ@‚S§]x{z zƒ@ x „‡ƒ €‡|  xGx  –cwƒ@€‡ x{|~} •c]x{z}y€†s| ƒ@€ z}yƒaˆ_€‡„M€I{ƒa}y€†s| @wƒ@€‡| ƒav5v_ƒ@„‡„x  ‚^†s|5zy}xGw|–> x ¬¢ƒaˆ_„xCðcÉ)GŽƒ@€wSz±wƒ@|]Ç@x  Ô=^cž(Ÿf}†@vkˆ†{™_¡ƒ@|  ðañ@ñ@ñ@ñQ‰lðañ@ñ@ñùžÀŒf†@w)x{ƒ@‚S§kŒfx{ƒa}y]wyx^‰°zx{„‡x{‚^}y€†s|k xG}y§5†  « 42 44 46 48 50 52 54 56 58 100 1000 10000 100000 1e+06 1e+07 1e+08 test corpus perplexity _ number of parameters trigram+IBM1 쎀s]wyxÖcÉ G¢xGwyŒf†@wƒ@|_‚^xæ†@Œ }y§]xæ„M€‡|]x{ƒawk †  x{„ –@xGwzy_z)|~_CˆxGw±†@ŒŽuêȯ—ÔÆv_ƒawƒ@xG}xGwzG«  xG}y§]†  z±ˆc›ºƒzyƒ@„‡„ªƒawys€‡|« ¸ €M|5‚^x±}y§]x"|c5Jˆ,xGw0†@Œ,Œfx{ƒa}y]wyx{z#€‡| }y§]x—(š#—(  †  x{„‡z0Šƒ@z¯5‚§ºzyƒ@„M„xGw0}y§5ƒ@|À}y§]x|c5CˆxGw#†@Œ v_ƒawƒ@xG}xGwzƀ‡|(}y§]x%Œf5„M„=u‘ȯ—\Ôa£ˆxGŒf†@wyx ‚^†s v_ƒaw‰ €‡|]Æ}y§5x"—(š#—(Áƒ@|  „‡€‡|]x{ƒaw †  x{„Mz&u…Нƒ@|~}x  }† ˆxzy]wyxª}y§5ƒa}Cƒ@|~›3v,xGwŒ¨†@wªƒ@|5‚^x  €ž¥xGwyx{|5‚^xНƒ@z |]†@}  ]xÆ}†u‘È—ÔÆ†?–@xGw­_}}y€‡|]Z}y§]x/}wƒ@€M|5€‡|]%‚^†@w‰ v_5z{«¢¬…†Cx{„‡€M€‡|5ƒa}x)}y§_€‡z&v†szyzy€ˆW€‡„‡€}l›@£@u¢†@v_}y€‡€IGx  }y§]xª|c5JˆxGw/†@Œ¯uêȯ—Ôv_ƒawƒ@xG}xGwz/ˆc›p}wƒ@€‡|5€‡|5 „‡€‡|5x{ƒaw  †  x{„‡z&Š"€}y§%–aƒaw€†s_z zy€IGx{z&†@Œ}wƒ@|_zy„‡ƒa}y€†s| v_ƒawƒ@xG}xGwzyxG}yz"†@ˆ5}yƒ@€M|]x  Œfwy†s }y§]xCu‘ȯ—\ÔJsƒ@€‡| wƒ@|]ÇO€‡|]]« E 
zzy§]†?Š"|k€‡|‹­_s]wyxCÖc£W}y§]xJ„‡ƒaw@xGw„‡€‡|]‰ x{ƒaw" †  x{„‡z  †%x^™]§5€ˆ_€‡}ƒ%–@xGwy›ºzy„‡€‡s§9}†q–@xGwy}wSƒ@€‡|O‰ €‡|]\x^¥,x{‚^}G£Š"€}y§Ä}y§]x˜†@v5}y€‡5 vWƒawƒ@ xG}xGw®zxG} zy€IGx˜ƒawy†s5|  ÔG—\£‚^†s vWƒawyx  }†»Ö?ža— v_ƒawƒ@ x^‰ †  x{„ І@w  v_ƒ@€wSz v5v9™ ` Öù= a HOÔa«úñ a Öù=è2uêȯ—Ô Öø]£øÕ?H@Õc£øÖ@ÖOÔ ø9Öc« H ñb Öù=è2uêȯ—Ô Ôa£úñ@ñ@ñO£úñ@ñ@ñ ø9Öc«øð ñO«øÕb —˜š0—˜ Ôa£úñ@ñ@ñ ø5Ôa«‡Ô žc«ií b —˜š0—˜ ÖañO£úñ@ñ@ñ ð@Öc«øð ø©Hc«øÕb ¬¢ƒaˆ_„xCÖcÉ%ú¯†s v_ƒaw€‡zy†s|Z†@Œ& †  x{„vxGwyŒf†@wƒ@|5‚^x{z{« ¬)§5xdce?½ ýgf ‹½ih ‚^†s„‡5ª|\s€‡–@x{z }y§]x‹|~_CˆxGw%†@Œ І@w  v_ƒ@€wz%zx{„‡x{‚^}x  ˆ~›\}y§]xZu‘ȯ—\Ôºsƒ@€M|\wSƒ@|]Ç9‰ €‡|5J xG}y§]†  £O}y§]xjffkƂ^†s„M5|Zs€–@x{z¯}x{z}±‚^†@wyv_5z vxGwyv_„‡x^™O€}l›@£¢ƒ@|  }y§]xl`©‚^†s„‡_|3s€–@x{z}y§]xZv,xGwy‰ v_„‡x^™O€}l›  wy†@væƒ@zCƒ®v,xGwS‚^x{|9}yƒa@xZ†@Œ}y§5x ˆ_ƒ@zx{„‡€‡|5x@« Öù=¹€‡z)}y§5x/}w€@wƒ@  †  x{„=ƒ@|  æô25æ  x{|5†@}x{z„‡€M|O‰ x{ƒaw€‡|9}xGwv,†s„Mƒa}y€†s|« }xGwz€‡|º}y§]xƌ 5„‡„ †  x{„¤« ¬¢ƒaˆ_„x±Ö"v5wyx{zx{|~}yz…­W|5ƒ@„~wyx{zy5„}yz…Œf†@w¢–aƒaw€†s_z…„‡€M|O‰ x{ƒawƒ@|  —˜š0—˜× †  x{„‡zG« ¬)§5x/—(š#—(×†  x{„Mz s€–@x3ƒ»z}w€‡Çc€‡|5æ€‡ v5w†q–@x{ x{|~}º†q–@xGw‹}y§]x3„‡€‡|5x{ƒaw  †  x{„‡z{£/Š"€}y§ ƒ ÔGñ@ñ@ñQ‰ÚŒfx{ƒa}y]wyxǘš0—˜ †  x{„ vxGwyŒf†@w€‡|5JˆxG}}xGw)}y§5ƒ@|®}y§]x/ˆx{z})„‡€‡|5x{ƒaw¯†  x{„ Ÿ  x{zyv_€}x»‚^†s|9}yƒ@€‡|_€‡|] ÔGñ@ñ@ñÑ}y€M x{zkŒfxGŠxGw˜Š†@w  ‰ v_ƒ@€‡w v_ƒawƒ@ xG}xGwSzS¡S£cƒ@|  }y§]x)ˆx{z}#—(š#—( †  x{„ ›O€x{„  €M|]ÀƒºvxGwyvW„x^™O€‡}°›®wyx  5‚^}y€†s|p†@Œ¯ †@wyx }y§5ƒ@| ø©žb“†q–@xGw}y§]xƈ_ƒ@zx{„‡€M|]xƄ‡€‡|]x{ƒaw) †  x{„¤« m n ·°<at]¶#<Q<a·°4 ´ ¬)§5x ƒ@€‡|¹wyx{zy5„‡}3†@Œ®}y§5€‡z3vWƒav,xGw€‡zÒ}y§_ƒa}\}y§]x —˜š0—˜ÓŒfwƒ@ xGІ@wyÇ ƒav5vx{ƒawzÎ}†Ùˆx ƒ¹5‚§  †@wx;x^¥,x{‚^}y€–@x Šƒq›é}† ‚^†sCˆW€‡|]x΀‡|]Œf†@wƒa}y€‡†s| Œfwy†s  €ž¥,xGwx{|9}z†s5w‚^x{zª}y§_ƒ@|Ą‡€‡|5x{ƒaw€‡|9}xGwv,†s„MƒQ‰ }y€†s|7£Oƒa}±„x{ƒ@zy}Œf†@w¯}y§]xv5wy†@ˆ_„‡x{¹z}y  €x  §5xGwyx@«&ul} €‡z…Œ ƒ@€w„›Æx{ƒ@z›J}†zyxGx¯€M|9}y5€‡}y€–@x{„›Š"§~›/}y§5€‡z¢zy§]†s5„  ˆx&}y§]x 
‚Gƒ@zx@É¢—(š#—(;x{zzx{|9}y€Mƒ@„‡„›5„}y€‡v_„‡€x{zWv5wyx^‰  €‡‚^}y€‡–@xæzy‚^†@wyx{zpƒaw€‡z€‡|];Œfwy†s  €ž¥,xGwx{|9}kzy†s]w‚^x{z wƒa}y§]xGw¯}y§5ƒ@|Àƒ{–@xGwSƒas€‡|]%}y§]x{‹« ¬)§_€‡z s€–@x{z¯€‡|5Œ¨†@w‰ ƒa}y€†s|˜z†s]wS‚^x{z"Š"§_€‡‚§(ƒ@zyzy€s|kx{€}y§]xGw–@xGwy›®§5€s§ †@wC–@xGw›p„‡†qŠÙz‚^†@wyx{zC5‚§\ †@wyxª€M|ºW]x{|5‚^x †?–@xGw }y§]x­W|5ƒ@„¢wyx{z5„}G«èDѧ]x{|3zy_‚§3zy‚^†@wyx{zCƒawyx%ˆ_ƒ@zx  ]v†s|wyx{„‡€‡ƒaˆW„x0xG–O€  x{|5‚^x@£9}y§5€‡z¢Š"€‡„‡„c„x{ƒ  }†ÆˆxG}}xGw  †  x{„‡zG« ò|5x#z†s xGЧ5ƒa}¢z]wyv5w€Mzy€‡|]±wyx{z5„}7†@Œ5}y§5x{zx#x^™c‰ vxGw€‡ x{|~}yzZŠƒ@zº}y§5ƒa}º}y§]x˜u‘È—Ôpsƒ@€‡|5zÀŒfx{ƒa}y]wyx zx{„x{‚^}y€‡†s|à xG}y§]†  wyx{zy5„}x  €‡| ˆxG}}xGwÒ †  x{„‡z }y§5ƒ@|Ä}y§]x(—˜š0—˜ sƒ@€‡|_zª xG}y§]†  £  x{zvW€}x®}y§]x Œ ƒ@‚^}}y§_ƒa}}y§5x„‡ƒa}}xGwƀ‡zˆ_ƒ@zyx  †s|pƒºJ_‚§p †@wyx  €wx{‚^} x{ƒ@z]wyx†@Œ=x{ƒ@‚S§‹Œ¨x{ƒa}y5wyx?æúz±Š†@wy}y§ºŠ"€}y§5€M| }y§]xҗ˜š0—˜  †  x{„¤« E v,†szzy€ˆ_„x(x^™cvW„‡ƒ@|5ƒa}y€†s| Œf†@w#}y§5€Mz0€‡z#}y§5ƒa}}y§]x"sƒ@€‡|Z†q–@xGw±}y§5xwyxGŒfxGwyx{|5‚^x}w€ž‰ @wƒ@ €Mz |]†@}ƒp@†~†  v5wx  €‡‚^}†@w †@Œ}y§]x‹sƒ@€‡|;€‡| }y§]x®v_wyx{zx{|5‚^x‹†@ŒÆƒ@|~›»†@}y§]xGwŒfx{ƒa}y]wyx{zGî}y§5€‡z €‡z ˆ†@w|]x)†s5}0ˆc› }y§]x"Œ ƒ@‚^}#}y§_ƒa}G£~Œf†@w#–@xGw›zyƒ@„‡„]Œfx{ƒQ‰ }y]wyxzxG}yzŸf†s|ª}y§]x)†@w  xGw0†@ŒŽÔGñ@ñJІ@w  z ƒ@|  „x{zyzS¡S£ }y§]x±—˜š0—˜ØxG}y§]† % €  †s]}vxGwyŒf†@w }y§]x¯u‘È—Ô  xG}y§]†  « E |]†@}y§]xGwæx^™cv_„Mƒ@|5ƒa}y€†s|׀‡z3€M|5ƒ@‚G‚G]wƒQ‰ ‚G€x{z€‡|º}y§]x/sƒ@€‡|kƒav5v5wy†q™O€‡ªƒa}y€†s|5z)‚^†s vW]}x  ˆc› G wS€‡|9}I?æW xG}y§]†  £5Š"§5€M‚§‹€‡|~–@†s„–@x{z"ªƒ@|9›®|c5 xGw‰ €‡‚Gƒ@„Žv_ƒawSƒ@ xG}xGwz}y§5ƒa}Æwyx{•c5€wxJ}y5|_€‡|]]«Cì55wy}y§]xGw €‡|~–@x{z}y€sƒa}y€†s| €‡z wyx{•c5€wx  €‡|9}†æ}y§5€‡zƒ@|  †@}y§]xGw }x{‚§_|5€‡•c]x{zŒ¨†@w­W|  €‡|]–aƒ@„‡€  Š#†@w  vWƒ@€wzG£zy€‡|5‚^x ƒ@„‡„ xG}y§]†  zZ}x{z}x  ›c€‡x{„  x  z€s|5€ž­W‚Gƒ@|~}ª•c5ƒ@|O‰ }y€}y€x{zJ†@Œ"|]†s€MzxªˆxG›@†s|  ÖañO£úñ@ñ@ñpv_ƒ@€wz{«kÈx{‚Gƒ@5zx }y§]xz†s5w‚^x –@†c‚GƒaˆW5„‡ƒawy›˜‚^†s|9}yƒ@€‡|_z/ƒaˆ†s]}èžañO£úñ@ñ@ñ 
І@w  z}y§5€Mz7€‡z†@ˆc–c€†s_zy„›ƒ@|/5|]wyx{ƒ@„M€‡z}y€‡‚Gƒ@„‡„‡›"zyƒ@„‡„ |c5CˆxGw±†@ŒŽ}wƒ@|5z„‡ƒa}y€†s|5zG« E „}y§]†s]s§Ù}y§]xу@€‡| 5zx،¨†@w»}y§]xÑ †  x{„Zu §5ƒq–@x  x{zy‚^w€ˆx  €‡| }y§5€‡z˜v_ƒavxGwp€‡zp€‡|éƒav5v_„‡€‡‚GƒQ‰ }y€†s|5zĄ‡€Ç@x ¬…wƒ@|5zy¬#›~vxъ"§5€M‚§ |]xGx  }†¹ƒaÇ@x wƒav_€  v_wyx  €‡‚^}y€†s|_z"†@Œ]v‚^†s€‡|]À}yƒawy@xG}/}x^™O}G£…€} €‡z‹€‡|~}xGwyx{z}y€M|];}†Äzvx{‚G5„‡ƒa}x҃aˆ†s]}‹Š§]xG}y§]xGwkƒ —˜š0—˜  †  x{„]Œf†@w¼¢Ÿfç%¾ è0Ðy¿?¡0‚^†s5„  ƒ@„Mz†/ˆx)5zx^‰ Œ 5„]Œf†@w ¸ —˜¬Æ«cú†sv_ƒawyx  }†J}y§]xz}yƒ@|  ƒaw  |]†s€Mz› ‚§_ƒ@|5|]x{„ƒav5v5wy†sƒ@‚S§£&}y§5€‡zJ§5ƒ@z}y§]xZƒ  –aƒ@|~}yƒa@x‹†@Œ vxGw€}}y€M|]3J5‚S§Î„x{zyzª‚^†s v_„x^™Îzx{ƒaw‚S§Îv5w†c‚^x^‰  ]wx{zGî"†@ŒCƒ@„M„†qŠ"€M|]æƒ@|~›Î€‡|]Œf†@wƒa}y€†s|Ċ"§5€‡‚S§Ä€‡z  €wx{‚^}y„›p†@ˆ_zxGwy–aƒaˆ_„xÀ€‡|Ò}y§]xZ}wƒ@€‡|5€M|]k‚^†@wyv__z/}† ˆxæ–@xGwy› x{ƒ@zy€‡„›Á€‡|5‚^†@wv,†@wSƒa}x  €‡|~}†Ø}y§]x\ †  x{„ –O€‡ƒÄˆ†c†s„x{ƒ@| Œ¨x{ƒa}y]wx{zGîÀƒ@|  †@Œªƒ@| x{z}y€Mƒa}y€†s| v5wy†O‚^x  ]wxkŠ"§]xGwx(}wƒ@|_zy„‡ƒa}y€†s|Ø †  x{„v_ƒawƒ@ x^‰ }xGwz‚Gƒ@|ˆx)†@v5}y€M€IGx  Œf†@w_zx)Š"€}y§ªƒ@|x^™]€‡z}y€‡|] „‡ƒ@|]s_ƒa@xª †  x{„¤«poѝ€‡zyƒ  –aƒ@|~}yƒa@x{zJ€‡|_‚G„‡  x%}y§]x qTr û“?ýÚöúû‘ö#øþZ@üÿÚõ“ôûQùSùSþ&Sû$ÿ¤ýSûQ÷ yÿÚöøüSû5‘ü' §5€‡s§À‚^†szy}†@Œ…}wƒ@€‡|5€M|]—˜š0—˜  †  x{„‡zG£c}y§]xŒfƒ@‚^} }y§5ƒa}"¼¢Ÿfç%¾ è0Ðy¿?¡C€‡zCzy†s xGŠ"§5ƒa}C„‡x{zyz/@x{|]xGwSƒ@„ }y§5ƒ@| ¼¢Ÿ ¿c¾ ½Q¡CŒ¨†@w%ˆ_5€M„  €‡|]‹wx{ƒ@„‡€‡z}y€‡‚ª}wƒ@|5zy„Mƒa}y€†s|» †  ‰ x{„‡z{îƒ@|  }y§5xº„Mƒ@‚ydž@Œƒp x{‚§_ƒ@|5€‡zy x{•c5€–aƒ@„x{|~} }†®}y§5xš0—-ƒ@„@†@w€}y§_¦Œ¨†@wC€M|5‚^†@wyv†@wƒa}y€‡|]»Â§5€  ‰  x{|_Å%–Qƒaw€Mƒaˆ_„x{z¯€‡|~}†—(š#—(Ã †  x{„‡z/Ÿ zyxGxZŸ ì]†sz‘‰ }xGw{£,ðañ@ñ@ñ9¡±Œf†@w"ƒ  €‡z‚G5zyzy€†s|À†@ŒŽ}y§5€‡z¯v_wy†@ˆ_„x{Z¡S« s t 4 ´ tœM°¶0<a·°4 ´ ¬)§5xJv5w†@ˆ_„x{ †@Œ#zx{ƒaw‚S§5€‡|]ÀŒf†@w}y§5xˆx{z}}yƒawy@xG} }x^™O}#€‡|ªz}yƒa}y€‡z}y€‡‚Gƒ@„_}wƒ@|5zy„Mƒa}y€†s|ƒav5v_„M€‡‚Gƒa}y€†s|5z&‚Gƒ@| ˆxª@wx{ƒa}y„›\zy€M v_„‡€ž­Wx  €Œ±}y§]xÀŒf5|  ƒ@ x{|~}yƒ@„  €‡z‰ }w€‡ˆ_]}y€†s|C¼¢Ÿf½c¾ 
¿?¡€Mz¯x^™cv_ƒ@|  x ‹ €wyx{‚^}y„‡›ª€‡|À}xGwz †@Œ}y§]x  €‡zy}w€ˆ_]}y€‡†s|¼¢Ÿfç ¾ è Ðy¿Q¡S£…wƒa}y§5xGw/}y§5ƒ@|Ґ5z‰ €‡|5Ò}y§]xkz}yƒ@|  ƒaw  |5†s€‡z›~‰°‚§5ƒ@|5|5x{„)ƒav5v5wy†sƒ@‚S§«Ñu ‚^†s vWƒawyx  ƒªzy€‡ v_„‡x/„‡€‡|]x{ƒaw"†  x{„7Œ¨†@w0¼¢Ÿfç%¾ è0Ðy¿?¡ ˆ_ƒ@zyx  †s|\u‘ȯ—Ûæúzª †  x{„ÔºŠ"€‡}y§\ƒ@|»x{•c5€–aƒ@„x{|~} —˜š0—˜  †  x{„¤£"ƒ@|  Œf†s5|  }y§5ƒa}º}y§]x3—(š#—(  †  x{„Ƨ5ƒ@z®†q–@xGw7ø©žb’„†?Š#xGw®}x{zy}k‚^†@wyv_5z®v,xGwy‰ v_„‡x^™O€}l›@£  x{zv_€‡}x#5zy€M|]"}lŠ#††@w  xGwSz…†@ŒWƒas|5€}y  x ŒfxGŠ#xGwpv_ƒawSƒ@ xG}xGwzG« u®ƒ@„‡z† ‚^†sv_ƒawyx  zxG–@xGwƒ@„  xG}y§5†  zCŒ¨†@w%zx{„x{‚^}y€M|]k—˜š0—˜ І@w  ‰Úv_ƒ@€wCŒ¨x{ƒQ‰ }y]wx{zG£]ƒ@|  Œ¨†s5|  }y§_ƒa}¯ƒ%zy€M v_„x xG}y§]†  Ч5€‡‚§ wƒ@|5Çcz\v_ƒ@€wzƒ@‚G‚^†@w  €M|] }† }y§]x{€wsƒ@€‡| Š"€}y§_€‡|  †  x{„WÔ †a¥xGwz…z„‡€s§~}y„›ˆxG}}xGw¢vxGwyŒ¨†@wSƒ@|5‚^x0ƒ@|  zy€‡s|5€ž­W‚Gƒ@|~}y„›Z„‡†qŠxGw‚^†s v_5}yƒa}y€†s|5ƒ@„=‚^†szy})}y§5ƒ@|kƒ  †@wx˜@x{|5xGwƒ@„/—˜š0—˜ Œfx{ƒa}y]wyx^‰°zyx{„x{‚^}y€†s|Áƒ@„@†a‰ w€‡}y§5  5x)}†5G0w€‡|~}I@«&좀M|5ƒ@„‡„›@£su z]@@x{z}}y§5ƒa}€‡} ƒq›ºˆ,xƌfw5€‡}Œf5„W}†x^™Ov_„†@wyxÆ}y§5xC€  x{ƒ†@Œ&5zy€‡|5%ƒ —˜š0—˜¹ †  x{„…Œ¨†@w#¼¢Ÿfç ¾ è Ðy¿?¡ƒ@zƒ@|˜ƒ@„}xGw|5ƒa}y€–@x }†}y§5xC|]†s€‡z›~‰°‚§_ƒ@|5|]x{„ƒav5v5w†sƒ@‚§®}† ¸ —(¬/« n3tu ´ 4v7MÚ2_µ0872QØ2 ´ >s< ¬)§_€‡z抆@wyÇ׊¯ƒ@z»‚Gƒawyw€x  †s]}»ƒ@z»v_ƒawy}†@Œ®}y§]x ¬…wƒ@|5zy¬#›cv,x(v5wy†Qåx{‚^}‹ƒa}xw E ï=uS£Œf_|  x  ˆc›Î}y§]x Cƒa}y]wƒ@„ ¸ ‚G€x{|_‚^x{z(ƒ@|  š0|5s€‡|]xGxGw€‡|5w)x{zx{ƒawS‚§ ú†s_|5‚G€‡„†@Œ)ú¯ƒ@|5ƒ  ƒO«u¯Š"€Mzy§‹}†ª}y§_ƒ@|]Ç­=]›®ï=ƒQ‰ v_ƒ@„M xkƒ@|  E |  wyx{ƒ@zªš#€‡zx{„x®Œf†@wº‚^†s x{|~}yzª†s| }y§]x"v_ƒavxGw{£sƒ@|  G#§5€‡„‡€v_v,xï=ƒ@|]s„Mƒ@€‡zŽŒ¨†@w#€‡|5zv_€‡w€‡|]  €Mzy‚G5zyzy€‡†s|5zG« y 2z‘256s2 ´ tO2_< {#|'}R~=€e‚pƒR„†…|0‡6ˆ‰|]…ŠŒ‹'|0…Ž‘‡’…Š,“”‡’•–|]~i‚—‹˜|]–‰Š ™Z~=š‡6…›™Z…‡’œ0–˜‰Šj‹0ž'–…,Ÿ|] ~=R¢¡'Šj£†|0…›“”~i‚¤|]¥¦~‰§Š ¨‘|0…ˆ©ƒB‹'ž'}T~©ª«„J•–Ь£†|­š‡¤§¯®#§ ¡0Ь°±ž'|0–²€j³ ´ ¥H‡6T–е|]…§(£†|­š‡¤§F{#|]‘ž·¶±}R¸¡0³ ¹‰º0º'º³ ´ |·ƒ ‘‡’}RT‡¤•i|0‚»¥¼|'•–‡6…~½T|]…}R‚¤|·‘‡6ž'…¾ 
¨¿‡’…|]‚À‘~iÁž0T=Š @üTû?þ‘ûGÿÚ÷Q‘ü,#ô$™@þŽÿ¤ýSöúûQþ$÷¤ö – ÿSû?þ‘ü,?÷"!(/ ‹˜Âeà ¶Äž0‘¸}T–ž'Á ¹=º'º0º³ Å ~=•–…‡’•=|]‚«T~=Áž'R‰Š Å8–~~i…˜T~=ƪǞ0ÀŸ|0…œ0|0œ0~›|0…§ ´ Á~i~=•–È®2‘ž]ƒ •i~=}‘}R‡’…œŠ Å8–~ ‹'ž0–…} ±ž'Á¸‡6…} ñ…‡’š0~=‘}T‡p¢¡'Š ¶P¶P¶É³ •©‚¤}RÁ³ Ê¢–³ ~=§ Ë­¶±}‘º0º˜Ë·Á‘ž]Ê¢~=•©‘}Ë­¥jˉÌ…|]‚ ‘~iÁž0T=³ €e§|]¥ÍŸ2³8ÎÄ~i‘œ0~==Š ´ T~iÁ–~i…À€Ï³P£e~=‚6‚¤|d®2‡’~©T|б|0…§ І‡6…•i~i…˜Ñ‹³­£e~=‚6‚¤|J®2‡’~©T|³#¹=º0º'Ò³€d“|·Ó ‡’¥j¥,Ô#… ƒ ‘Tž'Á˜¡l|]ÁÁTž˜|0•–xTžx°e|]T|]‚8Ÿ|0…œ0|0œ0~Õ®#Tž •©~‰}T}Rƒ ‡’…œ³ZÖ2×·ØeÙÚ ÛBÜ·Û?ÝÞ×·ßÜ·àá+Ý:ßâ]Ú Ý¤ãÛ?ÝÞäãŠå'å梹­ç¾ è0º·éê¹'³ –‘‡’}RTž'Á–~=H“g³ÑÎć¤}R–ž'Á³d¹=º0º˜ë ³lìjíiÚ îTÜ·àÑìjíiÛ ï#×]îTð­ã ñ ×]îJòPÜ·Û?ÛBí©îßNó†íä‘בâ0ßÝ:Û?ÝÞ×·ß³„eӪǞ'‘§³ ®+~©‘~i#¨Ä³˜Î‘ž·¶P…Š ´ T~=Á–~i…N€Ï³'£†~i‚’‚¤|Ï®2‡’~©‘‘|Š0Ðe‡’…•i~i…˜ £†~i‚’‚’|ô‹³Ñ®#‡6~iT|ŠÑ|]…§µõPž0ö~iTHŸ#³+“~i•©~i‰³µ¹‰º0º'è³ Å8–~÷¥¼|·T–~i¥¼|·‘‡’•=}†ž0ª#“|0•–‡6…~HÅ|]…}R‚¤|·‘‡6ž'…¾P®Ñ|·ƒ |]¥¦~©‘~il~=}RT‡’¥¼|·‘‡6ž'…³øÖ2×·ØeÙÚ ÛBÜ·Û?ÝÞ×·ßÜ·à÷á+Ý:ßâ]Ú Ý¤ãù Û?ÝÞä㊹‰ºæ?å0ç¾úå]Ò'è­é 蹉åŠ0‹'…~0³ ´ ³£†~i‚’‚¤|®2‡’~©‘‘|ŠÐj³£†~i‚’‚¤|®2‡’~©‘‘|Š|]…§l‹³Ÿ|] ~=R¢¡'³ ¹‰º0º'ë³ûB…§ •©‡’…œZªÇ~=|]T‘~=}+ž0ª|]…§ ž'¥,Ì~=‚’§}=³+Å~‰•– ƒ …‡¤•i|0‚õP~iÁž0TJÄ“ñƒ¢ ´ ƒBº'뷃‘¹=ü0üŠÄ“Ãj³ ý ~iž'Tœ'~¼¨ž'}RT~==Š+®2‡’~i‘‘~Hû¢}‘|]ö~i‚’‚’~0Š+|]…§g®#‡6~=T‘~¼®2‚¤|]¥Hƒ ž'…§ ž0…³F¹‰º0ºê³Å |0Tœ'~©Rƒ?T~iӝþ“”~‰§ ‡’|]T~‰§ÆûB…˜T~=‘|'•ƒ ‘‡6š'~µ“|0•–‡6…~ÿÅ |]…}T‚’|]T‡’ž0…³Ü'ä Ý:ßíîTÜ·ßãà6Ü·ù Û?ÝÞ׷ߊ¹‰å¾6¹·ê]ë‰é¹=º0ü³ ý ~iž'Tœ'~Õ¨ž˜}¢‘~i‰³ôå ³”ûB…•©ž0‘Áž0|·T‡’…œ”Áž'}T‡p‘‡6ž'…g‡6…ƒ ªÇž'T¥¼|·‘‡6ž'…Œ‡6…˜Tž»|Ɠ|·Ó ‡’¥j¥ Ô#…'‘Tž'Á¡ÀË “”‡’…‡6ƒ ¥÷¥ £†‡6š'~i‘œ0~=…•©~ôT|]…}T‚¤|·T‡’ž0…À¥¦ž § ~i‚?³«ûB…›ò8îT×·ù äíí ·Ý:ߘâ]ã†× ñ Û í Û Ö2×·ØeÙÚ ÛBÜ·Û?ÝÞ×·ßÜ·à ìjÜ·Û?Ú îTÜ]à˜áÜ]ßù â0ÚÜiâ˜ídá íÜ·îßÝ:ߘâ”×·îTð·ã×ÙÖ2×iìJáá­ŠHŸ‡¤}Röž0…Š ®+ž'R‘œ'|0‚ Š ´ ~iÁT~i¥÷ö~==³€eÄŸ ´ ‡6œ˜°eŸ Ÿ2³ û¢}T¥¦|0~i‚ ý |0‘•  |·ƒ Ð2|]‘~=|Š˜¨|]…•i‡’}‘•©ž Ä|'}T|'•©ö~iT‘|Š|0…§ Âe~i‘¥¦|0…… °e~i¡'³—¹=º0º³l€±…d‡p‘~i|·‘‡6š'~0Š2£J®Ñƒ ö|0}T~=§ }T~=|0‘•–x|]‚’œ0ž0‘‡6T–¥ ªÇž'Ï}¢|·‘‡’}RT‡¤•i|0‚Ñ¥¦|'•–‡’…~HT|]…}Rƒ ‚¤|·‘‡6ž'…³¼ûB…lûT ´ Ÿ®2ƒ 
ºxæÞûT ´ Š2¹‰º0º ˜çŠÁ|]œ0~‰}¦¹0¹‰è'ë‰é ¹'¹=è ³ ¹=º'º ³lòÄîT׉äíí ]Ý:ߘâ]ã × ñ Ûí]Û ©ßÛBí©îßÜ·Û?ÝÞ×·ßÜ]àZÖÑ×]ßù ñ íiîTíißäíÆ×·ß]Ù×=ð'í©ß á Ü·ßâ]ÚÜ=â'íµò8îT×=äí©ããÝ:ßâ ·Ö+ù áò!#"$$ %0Š ´ ¡ § …~=¡0Š€e}RT|]‚’‡’|Š£e~‰•©~i¥÷ö~==³ ¨Ä³‹0~=‚6‡’…~=¸ |]…§ôõɳŸ#³“”~i•©~==³Õ¹‰º ³ÏûB…˜T~=TÁž0‚¤|·‘~=§ ~‰}¢‘‡6¥¼|·‘‡6ž'…µž0ª†“|]‘¸0ž·šg}Tž0•©~NÁ|]|]¥¦~iT~i}jªÇTž'¥ }TÁ|]}T~±§|·|³¿ûB…ÕÔ±³ ´ ³ ý ~i‚¤}T~i¥¼|Ï|]…§¼Ÿ2³'°j³'™É|0…|]‚?Š ~‰§ ‡p‘ž0}iŠ0òPÜ]Û ÛBíiîßjóeíä×Tâ0ßÝ:Û?ÝÞ×·ßÕÝ:ß÷ò8îTÜ0äiÛ ÝÞäíi³]°±ž'R‘– ƒ Âež0‚’‚’|0…§Š€±¥¼}RT~=‘§|0¥ ³ ®2–³Ÿ |]…œ'‚’|0‡’}Ä|]…§ ý ³ ¨ž˜}¢‘~i‰³2å ³+Ãe}T‡6…œH•iž0…˜T~iӘTƒ § ~=Á~=…§ ~=…'H‡6…˜‘~i‘Áž'‚’|]T‡’ž0…xTžx•©ž0¥÷ö‡’…~ }R‘|·‘‡’}RT‡¤•i|0‚ ‚¤|]…œ'|]œ'~±|]…§H‘‘|0…}T‚’|]T‡’ž0…¼¥¦ž § ~i‚¤}ѪǞ'#‡’…'‘~i|0•©T‡’š0~ “ÅZ³ ûB…”Ö2×·ßÛBíißÛ?ù'&eÜ·ã©í (”Ú àWÛ?Ý:Ø¦í ·ÝÞÜ)©ß ñ ×]îئÜ]Û ÝÞ×]ß * ää‘í©ãã+Çó, *.­Š ®Ñ|0T‡¤}iŠ ¨‘|0…•©~'Š€±Á‘‡6‚?³ ®2–‡6‚’‡6ÁÁ~ Ÿ|0…œ0‚¤|]‡¤}iŠ ´  ~=ö|0}RT‡’~i… ´ |]š ~'Š ý ~=ž0‘œ0~ ¨ž'}Rƒ ‘~i‰ŠþÔ#‚6‚’‡6ž0R “|0•¸‚’ž·š˜‡6‘•–Š |]…§ ý ¡1Ÿ|]Á|]‚’¥H~'³ å   ³€¬•©ž0¥¦Á|0T‡¤}Tž0…—ž]ªÉ‘–~iž'T~iT‡¤•i|0‚J|0…§—}T~iTƒ ž'T‡’~i…˜T~‰§H~iš·|]‚’|·‘‡6ž'…¦ÁTž •i~=§ ‘~=}2ž]ª|Z…~i¶d¢¡Á~±ž]ª ‡’…˜T~i|0•©T‡’š0~¦“ÅZ³¼ûB…/íä×·ß 0ßÛ íiîßÜ]Û ÝÞ×]ßÜ·àeÖÑ×]ßù ñ íiîTíißäí ß¼áÜ]ߘâ0ÚÜiâ˜í8óeí©ã©×]Ú îTäíã†Ü]ß1 3254·Ü·àWÚÜ]Û ÝÞ×]ß Þáó62JÖ1­ŠÁ|0œ0~‰}#Ò]ü¹é Ò]üŠ]€ÄT–~=…}=Š ý ‘~i~‰•©~'Š˜‹'…~'³ ´ ³Ä°±‡’~=}‘}R~=…Š ´ ³Ðў0œ0~=‚ бÂj³°e~i¡0б|0…§ †³Å8‡’‚’‚6¥¼|]……³ ¹‰º0º³€È£†® ö|0}T~=§—}T~=|0‘•–Æ|0‚6œ'ž0‘‡p‘–¥ ªÇž0 }¢|·ƒ ‘‡’}RT‡¤•i|0‚N¥¼|'•–‡6…~—T|]…}T‚¤|·T‡’ž0…³ ûB… òÄîT׉ä‘íí ·Ý:ßâ·ã × ñ Û í87:9­Û * ßßÚÜ·à;íí©Û?Ý:ߘâ × ñ Ûí * ããi×=äiÝÞÜ·ù Û?ÝÞ×·ß ñ ×]îNÖ2×·ØeÙÚ ÛBÜ·Û?ÝÞ×·ßÜ·à áÑÝ:ߘâ0Ú Ý¤ãÛ?ÝÞä©ã< * Öá1þÜ]ß1 ">=iÛ?ßÛBíiîßÜ·Û?ÝÞ×·ßÜ·àdÖ2×·ß ñ íiîTíißäí1׷ߎÖ2×·ØeÙÚ ÛBÜ·ù Û?ÝÞ×·ßÜ·àá+Ý:ߘâ0Ú Ý¤ãÛ ÝÞä©ã@Ö áARì@BC0"$$ %]ŠÁ|0œ0~=}8º'Ò ­é º'Ò˜ê Š“ž0…˜T> ~=|0‚ ŠÄ|0…|0§|Š €±œ'}¢‰³ ¨‘|0…ˆ”‹0ž˜}R~iªÉ„J•–ŠP–T‡¤}RTž0Á–dÅ8‡’‚6‚’¥¼|]……Š8|0…§d±~iTƒ ¥¼|0……x°e~i¡'³þ¹‰º0º'º³jûB¥¦Á‘ž·š0~‰§ |0‚6‡’œ0…¥¦~=…'Z¥¦ž § ~i‚¤} 
ªÇž'†}R‘|]T‡¤}¢‘‡’•=|]‚¥¼|'•–‡6…~ɝT|]…}T‚¤|·T‡’ž0…³ÄûB…lòÄîT׉ä‘íí ·ù Ý:ßâ·ãÉ× ñ Ûí, ß1 þÖ2×·ß ñ í©î‘í©ßä‘í÷×·ßD28رÙÝ:îÝÞä‘Ü]àEí©Ûù ×F ­ãJÝ:ß ìjÜ]Û Ú îTÜ]à á Ü·ßâ]ÚÜ=â'í±ò8îT×=äí©ããÝ:ßâGH2ì†á ò!·Š ž'‚6‚’~iœ'~É®Ñ|]‘¸Š“|0T¡‚¤|]…§³ †|]‘T¡ÿ®#T‡’…˜Tˆ0³ ¹=º0º³ÿ¨|0}R¦•iž0¥¦Á |·T‡’ž0…dž]ªJ“|·Óƒ ‡’¥j¥ Ô#…'‘Tž'Á¡Ë]“”‡’…‡6¥÷¥ £†‡6š'~i‘œ0~=…•©~ŒªÇ~‰|·TT~ œ˜|]‡’…³,ûB…ÆûT ´ Ÿ ®ÑƒBº —æÞûT ´ І¹‰º0º'çŠ8Á|0œ0~‰}Nå0è·é å  'Ò³ õPž'…|]‚¤§ÆõPž˜}R~=… ªÇ~i‚¤§³«¹=º'º0Ò³€(¥¼|]Ó‡’¥j¥ ~i…˜T‘ž0Á¡ |0ÁÁ‘ž'|'•–¦Tž¦|0§|0Á T‡’š0~J}¢|·T‡¤}RT‡¤•i|]‚‚¤|]…œ'|]œ'~±¥¦ž §ƒ ~=‚6‚’‡’…œ³†ÖÑ×]رÙÚ Û íiî5]Ùíí‘ä ¼Ü]ß1 eá Ü·ßâ]ÚÜ=â'íiйI¾6¹F˜ê‰é å'å³ “”‡¤•–~=‚ ´ ‡’¥¼|]§Š ý ~iž'Tœ'~ըij¿¨ž'}RT~i‰Š2|]…§g®2‡’~i‘T~Õû¢}Rƒ |0ö~=‚6‚’~0³ ¹‰º0º˜å ³ Ãe}T‡6…œÀ•iž0œ0…|·T~‰}‘žÀ|0‚6‡’œ0… }R~=… ƒ ‘~i…•i~=}H‡’…ÿö‡’‚6‡’…œ0|]‚P•©ž'TÁž0|³ûB…—òÄîT׉äí‘í ]Ý:ߘâ]ãþ× ñ Ûí, ÛxÖ2×·ß ñ íiîTíißäíj×]ßJí×·îTíiÛ?ÝÞä‘Ü]à Ü·ß .í©Û×F ·ù ×]à6בâ]ÝÞäÜ·àKããÚíã¦Ý:ßLÜ'ä Ý:ßíM î‘Ü·ßãà6Ü]Û ÝÞ×]ß8NO0P·Š “ž0…˜T> ~=|0‚ Š>Q†R ~=ö~‰•]³ {2~iƒ?¡‡Sl|]…œ±|0…§É€±‚’~©ÓTSx|0‡6ö~i‚?³#¹‰º0º³¨|'}¢ §~=•©ž § ‡’…œ ªÇž'j}R‘|·‘‡’}RT‡¤•i|0‚2¥¼|0•–‡’…~¦‘‘|0…}R‚¤|·‘‡6ž'…³ÕûB…lûT ´ Ÿ ®Ñƒ º æÇûT ´ й‰º0º'çŠÁ|0œ0~‰}8å˜ê0ê0ë‰é å˜ê0ê³
An Information-Theory-Based Feature Type Analysis for the Modelling of Statistical Parsing

SUI Zhifang †‡, ZHAO Jun †, Dekai WU †

† Hong Kong University of Science & Technology, Department of Computer Science, Human Language Technology Center, Clear Water Bay, Hong Kong
‡ Peking University, Department of Computer Science & Technology, Institute of Computational Linguistics, Beijing, China
[email protected], [email protected], [email protected]

Abstract

The paper proposes an information-theory-based method for feature type analysis in probabilistic evaluation modelling for statistical parsing. The basic idea is to use entropy and conditional entropy to measure whether a feature type captures some of the information needed for syntactic structure prediction. Our experiment quantitatively analyzes the predictive power of several feature types for syntactic structure and draws a series of interesting conclusions.

1 Introduction

In the field of statistical parsing, various probabilistic evaluation models have been proposed, with different models using different feature types [Black, 1992] [Briscoe, 1993] [Brown, 1991] [Charniak, 1997] [Collins, 1996] [Collins, 1997] [Magerman, 1991] [Magerman, 1992] [Magerman, 1995] [Eisner, 1996]. How can the effects of these different feature types on syntactic parsing be evaluated? This paper proposes an information-theory-based feature type analysis model, which uses the measures of predictive information quantity, predictive information gain, predictive information redundancy and predictive information summation to quantitatively analyse the predictive power of different contextual feature types, or combinations of feature types, for syntactic structure.
In the following, Section 2 describes the probabilistic evaluation model for syntactic trees; Section 3 proposes an information-theory-based feature type analysis model; Section 4 introduces several experimental issues; Section 5 quantitatively analyses the different contextual feature types and feature type combinations from the viewpoint of information theory and draws a series of conclusions on their predictive power for syntactic structures.

2 The probabilistic evaluation model for statistical syntactic parsing

Given a sentence, the task of statistical syntactic parsing is to assign a probability to each candidate parsing tree that conforms to the grammar and to select the one with the highest probability as the final analysis result. That is:

T_{best} = \arg\max_{T} P(T|S)    (1)

where S denotes the given sentence, T ranges over the set of all candidate parsing trees that conform to the grammar, and P(T|S) denotes the probability of parsing tree T for the given sentence S. The task of the probabilistic evaluation model in syntactic parsing is the estimation of P(T|S). In a syntactic parsing model which uses a rule-based grammar, the probability of a parsing tree can be defined as the probability of the derivation which generates that parsing tree for the given sentence. That is,

P(T|S) = P(r_1, r_2, \ldots, r_n | S) = \prod_{i=1}^{n} P(r_i | r_1, r_2, \ldots, r_{i-1}, S) = \prod_{i=1}^{n} P(r_i | h_i, S)    (2)

where r_1, r_2, \ldots, r_{i-1} denotes the derivation rule sequence applied so far, and h_i denotes the partial parsing tree derived from r_1, r_2, \ldots, r_{i-1}.
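The chain-rule scoring of equation (2) can be sketched as follows; the function name, toy rule labels, and the callback signature are illustrative assumptions (the sentence S is treated as fixed inside the callback), not part of the paper's model.

```python
import math

def derivation_log_prob(rules, rule_cond_prob):
    """Score a parse tree by its derivation, as in equation (2):
    log P(T|S) = sum_i log P(r_i | r_1..r_{i-1}, S).
    rule_cond_prob(r, history) returns P(r | history, S), with the
    sentence S fixed inside the callback."""
    logp = 0.0
    history = ()
    for r in rules:
        logp += math.log(rule_cond_prob(r, history))
        history = history + (r,)
    return logp
```

Working in log space keeps the product numerically stable when derivations are long.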
In order to accurately estimate the parameters, we need to select some feature types $F_1, F_2, \ldots, F_m$, by which we can divide the contextual conditions $(h_i, S)$ for predicting rule $r_i$ into equivalence classes, that is, $(h_i, S) \xrightarrow{F_1, F_2, \ldots, F_m} [h_i, S]$, so that

$$\prod_{i=1}^{n} P(r_i \,|\, h_i, S) \approx \prod_{i=1}^{n} P(r_i \,|\, [h_i, S]) \quad (3)$$

According to equations (2) and (3), we have:

$$P(T|S) \approx \prod_{i=1}^{n} P(r_i \,|\, [h_i, S]) \quad (4)$$

In this way, we obtain a unified expression of the probabilistic evaluation model for statistical syntactic parsing. The difference among parsing models lies mainly in the feature types or feature type combinations they use to divide the contextual conditions into equivalence classes. Our ultimate aim is to determine which combination of feature types is optimal for the probabilistic evaluation model of statistical syntactic parsing. Unfortunately, the state of knowledge in this regard is very limited. Many probabilistic evaluation models have been published inspired by one or more of these feature types [Black, 1992] [Briscoe, 1993] [Charniak, 1997] [Collins, 1996] [Collins, 1997] [Magerman, 1995] [Eisner, 1996], but discrepancies between training sets, algorithms, and hardware environments make it difficult, if not impossible, to compare the models objectively. In this paper, we propose an information-theory-based feature type analysis model by which we can quantitatively analyse the predictive power of different feature types or feature type combinations for syntactic structure in a systematic way. The conclusions are expected to provide a reliable reference for feature type selection in probabilistic evaluation modelling for statistical syntactic parsing.
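Equation (4) reduces tree scoring to a product of conditional rule probabilities over equivalence classes of contexts. A minimal sketch of such a scoring function (the function names and data layout are illustrative, not from the paper):

```python
def tree_prob(derivation, rule_probs, classify):
    """P(T|S) per equation (4): the product over the derivation steps of
    P(r_i | [h_i, S]), where `classify` maps a (history, sentence) context
    to its equivalence class under the chosen feature types."""
    p = 1.0
    for history, sentence, rule in derivation:
        p *= rule_probs[(classify(history, sentence), rule)]
    return p

# Toy usage: a two-step derivation whose contexts collapse into one class "h".
probs = {("h", "r1"): 0.5, ("h", "r2"): 0.4}
deriv = [("t1", "s", "r1"), ("t2", "s", "r2")]
print(tree_prob(deriv, probs, lambda h, s: "h"))  # 0.2
```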
3 The information-theory-based feature type analysis model for statistical syntactic parsing

In the prediction of stochastic events, entropy and conditional entropy can be used to evaluate the predictive power of different feature types. To predict a stochastic event, if the entropy of the event is much larger than its conditional entropy given a feature type, the feature type grasps some of the important information about the predicted event. Following this idea, we build the information-theory-based feature type analysis model, which is composed of four concepts: predictive information quantity, predictive information gain, predictive information redundancy and predictive information summation.

• Predictive Information Quantity (PIQ)

PIQ(F;R), the predictive information quantity of feature type F for predicting derivation rule R, is defined as the difference between the entropy of R and the conditional entropy of R given feature type F:

$$PIQ(F;R) = H(R) - H(R|F) = \sum_{f \in F,\, r \in R} P(f,r) \log \frac{P(f,r)}{P(f)P(r)} \quad (5)$$

Predictive information quantity can be used to measure the predictive power of a single feature type in feature type analysis.

• Predictive Information Gain (PIG)

For the prediction of rule R, PIG(F_x; R | F_1, F_2, ..., F_i), the predictive information gain of adding F_x on top of a baseline model employing the feature type combination F_1, F_2, ..., F_i, is defined as the difference between the conditional entropy of predicting R based on feature type combination F_1, F_2, ..., F_i and the conditional entropy of predicting R based on feature type combination F_1, F_2, ..., F_i, F_x.
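Definition (5) is exactly the mutual information between F and R. As an illustration, PIQ can be computed directly from joint counts of (feature value, rule) pairs; the toy counts below are invented:

```python
import math
from collections import Counter

def piq(pairs):
    """PIQ(F;R) = H(R) - H(R|F) = sum_{f,r} P(f,r) * log2(P(f,r)/(P(f)P(r)))."""
    joint = Counter(pairs)                    # counts of (f, r) pairs
    n = sum(joint.values())
    pf = Counter(f for f, _ in pairs)         # marginal counts of f
    pr = Counter(r for _, r in pairs)         # marginal counts of r
    total = 0.0
    for (f, r), c in joint.items():
        p_fr = c / n
        total += p_fr * math.log2(p_fr / ((pf[f] / n) * (pr[r] / n)))
    return total

# F perfectly predicts a uniform binary R, so PIQ equals H(R) = 1 bit.
pairs = [("a", "r1"), ("a", "r1"), ("b", "r2"), ("b", "r2")]
print(round(piq(pairs), 6))  # 1.0
```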
$$PIG(F_x; R \,|\, F_1, \ldots, F_i) = H(R \,|\, F_1, \ldots, F_i) - H(R \,|\, F_1, \ldots, F_i, F_x) = \sum_{f_1 \in F_1, \ldots, f_i \in F_i,\, f_x \in F_x,\, r \in R} P(f_1, \ldots, f_i, f_x, r) \log \frac{P(f_1, \ldots, f_i, f_x, r)\, P(f_1, \ldots, f_i)}{P(f_1, \ldots, f_i, f_x)\, P(f_1, \ldots, f_i, r)} \quad (6)$$

If PIG(F_x; R | F_1, F_2, ..., F_i) > PIG(F_y; R | F_1, F_2, ..., F_i), then F_x is deemed more informative than F_y for predicting R on top of F_1, F_2, ..., F_i, no matter whether PIQ(F_x;R) is larger than PIQ(F_y;R) or not.

• Predictive Information Redundancy (PIR)

Based on the above two definitions, we can further define predictive information redundancy. PIR(F_x, {F_1, F_2, ..., F_i}; R) denotes the redundant information between feature type F_x and the feature type combination {F_1, F_2, ..., F_i} in predicting R, defined as the difference between PIQ(F_x;R) and PIG(F_x; R | F_1, F_2, ..., F_i). That is,

$$PIR(F_x, \{F_1, \ldots, F_i\}; R) = PIQ(F_x; R) - PIG(F_x; R \,|\, F_1, \ldots, F_i) \quad (7)$$

Predictive information redundancy can be used as a measure of the redundancy between the predictive information of a feature type and that of a feature type combination.

• Predictive Information Summation (PIS)

PIS(F_1, F_2, ..., F_m; R), the predictive information summation of the feature type combination F_1, F_2, ..., F_m, is defined as the total information that F_1, F_2, ..., F_m can provide for the prediction of a derivation rule. Exactly,

$$PIS(F_1, \ldots, F_m; R) = PIQ(F_1; R) + \sum_{i=2}^{m} PIG(F_i; R \,|\, F_1, \ldots, F_{i-1}) \quad (8)$$

4 Experimental Issues

4.1 The classification of the feature types

The predicted event of our experiment is the derivation rule that extends the current non-terminal node. The feature types used for prediction can be classified into two classes, history feature types and objective feature types. In the following, we take the parsing tree shown in Figure-1 as an example to explain this classification.
In Figure-1, the current predicted event is the derivation rule to extend the framed non-terminal node VP. The part connected by the solid line belongs to the history feature types: the already derived partial parsing tree, representing the structural environment of the current non-terminal node. The part framed by the larger rectangle belongs to the objective feature types: the word sequence spanning the leaf nodes of the partial parsing tree rooted at the current node, representing the final objectives to be derived from the current node.

4.2 The corpus used in the experiment

The experimental corpus is derived from the Penn TreeBank [Marcus, 1993]. We semi-automatically assign a headword and a POS tag to each non-terminal node. 80% of the corpus (979,767 words) is taken as the training set, used for estimating the various co-occurrence probabilities; 10% of the corpus (133,814 words) is taken as the testing set, used to calculate predictive information quantity, predictive information gain, predictive information redundancy and predictive information summation. The remaining 10% of the corpus (133,814 words) is taken as the held-out set. The grammar rule set is composed of 8,126 CFG rules extracted from the Penn TreeBank.

[Figure-1: The classification of feature types. The parse tree for "Pierre Vinken will join the board as a nonexecutive director Nov. 29", with the current non-terminal node VP framed.]

4.3 The smoothing method used in the experiment

In the information-theory-based feature type analysis model, we need to estimate the joint probability $P(f_1, f_2, \ldots, f_i, r)$. Let $F_1, F_2, \ldots, F_i$ be the feature type series selected so far, with $f_1 \in F_1, f_2 \in F_2, \ldots, f_i \in F_i, r \in R$. We use a blended probability $\tilde{P}(f_1, f_2, \ldots, f_i, r)$ to approximate $P(f_1, f_2, \ldots, f_i, r)$ in order to mitigate the sparse data problem [Bell, 1992].
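The escape-style blending defined in formulas (9)-(14) below mixes joint probabilities of progressively shorter contexts, with mixture weights derived from escape probabilities. A toy sketch of the scheme under the assumption that training events are (feature tuple, rule) pairs; this illustrates the definitions, it is not the authors' implementation:

```python
from collections import Counter

def blended_prob(f, r, events):
    """Blended joint probability P~(f_1..f_i, r): joint probabilities of
    shorter and shorter contexts are mixed, with weights w_k derived from
    escape probabilities e_k (cf. formulas (9)-(14)).  `events` is a list
    of (feature_tuple, rule) training events; illustrative sketch only."""
    i = len(f)
    n = len(events)
    rule_counts = Counter(rr for _, rr in events)
    # Escape probabilities, stored shifted so that esc[0] is e_{-1} = 0.
    esc = [0.0, len(rule_counts) / n]            # e_{-1}, e_0
    for k in range(1, i + 1):
        ctx = Counter(rr for ft, rr in events if ft[:k] == f[:k])
        total = sum(ctx.values())
        esc.append(len(ctx) / total if total else 1.0)
    # Weights: w_k = (1 - e_k) * prod_{s=k+1..i} e_s  (so w_i = 1 - e_i).
    weights = []
    for k in range(i + 2):
        w = 1.0 - esc[k]
        for s in range(k + 1, i + 2):
            w *= esc[s]
        weights.append(w)
    # Component probabilities: uniform P_{-1}, unigram P_0, then joint P's.
    comps = [1.0 / n, rule_counts[r] / n]
    for j in range(1, i + 1):
        comps.append(sum(1 for ft, rr in events
                         if ft[:j] == f[:j] and rr == r) / n)
    return sum(w * p for w, p in zip(weights, comps))

ev = [(("a",), "r1"), (("a",), "r1"), (("b",), "r2")]
print(round(blended_prob(("a",), "r1", ev), 4))  # 0.5556
```

Because the weights telescope to 1 and the uniform component is always positive, the blend stays strictly positive even for unseen contexts, matching the remark after (14).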
$$\tilde{P}(f_1, f_2, \ldots, f_i, r) = w_{-1} P_{-1}(r) + w_0 P_0(r) + \sum_{j=1}^{i} w_j P(f_1, f_2, \ldots, f_j, r) \quad (9)$$

In the above formula,

$$P_{-1}(r) = \frac{1}{\sum_{\hat{r} \in R} c(\hat{r})} \quad (10)$$

$$P_0(r) = \frac{c(r)}{\sum_{\hat{r} \in R} c(\hat{r})} \quad (11)$$

where $c(r)$ is the total number of times that $r$ has been seen in the corpus. According to the escape mechanism in [Bell, 1992], we define the weights $w_k$ $(-1 \le k \le i)$ in formula (9) as follows:

$$w_k = (1 - e_k) \prod_{s=k+1}^{i} e_s \quad (-1 \le k \le i-1), \qquad w_i = 1 - e_i \quad (12)$$

where $e_k$ denotes the escape probability of the context $(f_1, f_2, \ldots, f_k)$, that is, the probability that $(f_1, f_2, \ldots, f_k, r)$ is unseen in the corpus, in which case the blending model has to escape to lower-order contexts to approximate $P(f_1, f_2, \ldots, f_k, r)$. Exactly, the escape probability is defined as

$$e_k = \begin{cases} \dfrac{\sum_{\hat{r} \in R} d(f_1, f_2, \ldots, f_k, \hat{r})}{\sum_{\hat{r} \in R} c(f_1, f_2, \ldots, f_k, \hat{r})} & 0 \le k \le i \\[2ex] 0 & k = -1 \end{cases} \quad (13)$$

where

$$d(f_1, f_2, \ldots, f_k, r) = \begin{cases} 1 & \text{if } c(f_1, f_2, \ldots, f_k, r) > 0 \\ 0 & \text{if } c(f_1, f_2, \ldots, f_k, r) = 0 \end{cases} \quad (14)$$

In the above blending model, the special probability $P_{-1}(r) = 1 / \sum_{\hat{r} \in R} c(\hat{r})$ gives all derivation rules an equal probability. As a result, $\tilde{P}(f_1, f_2, \ldots, f_i, r) > 0$ as long as $\sum_{\hat{r} \in R} c(\hat{r}) > 0$.

5 The information-theory-based feature type analysis

The experiments led to a number of interesting conclusions on the predictive power of various feature types and feature type combinations, which are expected to provide a reliable reference for the modelling of probabilistic parsing.

5.1 The analysis of the predictive information quantities of lexical feature types, part-of-speech feature types and constituent label feature types

• Goal

One of the most important developments in statistical parsing over the last few years is that statistical lexical information has been incorporated into the probabilistic evaluation model.
Some statistical parsing systems show that performance improves after lexical information is added. Our research aims at a quantitative analysis, from the viewpoint of information theory, of the differences among the predictive information quantities provided by lexical feature types, part-of-speech feature types and constituent label feature types.

• Data

The experiment is conducted on the history feature types of the nodes whose structural distance to the current node is within 2. In Table-1, "Y" in PIQ(X of Y; R) represents the node, and "X" represents the constituent label, the headword, or the POS of the headword of the node. In the following, the units of PIQ are bits.

• Conclusion

Among feature types in the same structural position of the parsing tree, the predictive information quantity of the lexical feature type is larger than that of the part-of-speech feature type, and the predictive information quantity of the part-of-speech feature type is larger than that of the constituent label feature type.
Table-1: The predictive information quantity of the history feature type candidates

PIQ(X of Y; R)                                    X= constituent label   X= headword   X= POS of the headword
Y= the current node                                     2.3609             3.7333           2.7708
Y= the parent                                           1.1598             2.3253           1.1784
Y= the grandpa                                          0.6483             1.6808           0.6612
Y= the first right brother of the current node          0.4730             1.1525           0.7502
Y= the first left brother of the current node           0.5832             2.1511           1.2186
Y= the second right brother of the current node         0.1066             0.5044           0.2525
Y= the second left brother of the current node          0.0949             0.6171           0.2697
Y= the first right brother of the parent                0.1068             0.3717           0.2133
Y= the first left brother of the parent                 0.2505             1.5603           0.6145

5.2 The analysis of the influence of structural relation and structural distance on the predictive information quantities of the history feature types

• Goal

In this experiment, we wish to find out how the structural relation and the structural distance between the current node and the node a given feature type is related to influence the predictive information quantity of that feature type.

• Data

In Table-2, SR represents the structural relation, and SD the structural distance, between the current node and the node the given feature type is related to.
Table-2: The predictive information quantity of the selected history feature types

PIQ(constituent label of Y; R)

        SR= parent relation        SR= brother relation                     SR= mixed parent and brother relation
SD=1    1.1598 (Y= the parent)     0.5832 (Y= the first left brother)
                                   0.4730 (Y= the first right brother)
SD=2    0.6483 (Y= the grandpa)    0.0949 (Y= the second left brother)      0.2505 (Y= the first left brother of the parent)
                                   0.1066 (Y= the second right brother)     0.1068 (Y= the first right brother of the parent)

• Conclusion

Among history feature types which have the same structural relation to the current node (e.g. both parent-child relations, or both brother relations), the one with the closer structural distance to the current node provides the larger predictive information quantity. Among history feature types which have the same structural distance to the current node, one in a parent relation to the current node provides a larger predictive information quantity than one in a brother relation or a mixed parent-and-brother relation (such as the parent's brother node).

5.3 The analysis of the predictive information quantities of the history feature types and the objective feature types

• Goal

Many existing probabilistic evaluation models prefer history feature types over objective feature types. We select some history feature types and objective feature types and quantitatively compare their predictive information quantities.

• Data

The history feature type we use here is the headword of the parent, which has the largest predictive information quantity among all the history feature types. The objective feature types are selected stochastically: the first word and the second word in the objective word sequence of the current node (see Section 4.1 and Figure-1 for detailed descriptions of the selected feature types).
Table-3: The predictive information quantity of the selected history and objective feature types

Class                     Feature type                                         PIQ(Y;R)
History feature type      Y= headword of the parent                            2.3253
Objective feature type    Y= the first word in the objective word sequence     3.2398
Objective feature type    Y= the second word in the objective word sequence    3.0071

• Conclusion

The predictive information quantities of both the first word and the second word in the objective word sequence are larger than that of the headword of the parent node, which itself has the largest predictive information quantity among all the history feature type candidates. That is to say, objective feature types may have larger predictive power than history feature types.

5.4 The analysis of the predictive information quantities of objective feature types selected by physical position, by heuristic information about headwords and modifiers, and by exact headword information

• Goal

Unlike the structural history feature types, the objective feature types are sequential. Generally, candidate objective feature types are selected according to physical position. However, from a linguistic viewpoint, physical position information can hardly capture the relations between linguistic structures. Therefore, besides physical position information, our research also selects objective feature types according to exact headword information and to heuristic information about headwords and modifiers. Through the experiment, we hope to find out what influence exact headword information, heuristic headword and modifier information, and physical position information each have on the predictive information quantities of the feature types.

• Data

Table-4 gives the evidence for the claim.
Table-4: The predictive information quantity of the selected objective feature types

Information used to select the objective feature type                             PIQ(Y;R)
Physical position information                                                      3.2398
  (Y= the first word in the objective word sequence)
Heuristic information 1: whether a word can act as the headword of the             3.1401
  current constituent, judged by its POS
  (Y= the first word in the objective word sequence that can act as the headword)
Heuristic information 2: whether a word can act as a modifier of the               3.1374
  current constituent, judged by its POS
  (Y= the first word in the objective word sequence that can act as a modifier)
Heuristic information 3: given the current headword, whether a word can            2.8757
  modify the headword
  (Y= the first word in the objective word sequence that can modify the headword)
Exact headword information                                                         3.7333
  (Y= the headword of the current constituent)

• Conclusion

The predictive information quantity of the headword of the current node is larger than that of a feature type selected according to the chosen heuristic information about headwords or modifiers, and larger than that of a feature type selected according to physical position. The predictive information quantity of a feature type selected according to physical position is larger than that of a feature type selected according to the chosen heuristic information about headwords or modifiers.

5.5 The selection of the feature type combination with the optimal predictive information summation

• Goal

We aim at proposing a method to select the feature type combination with the optimal predictive information summation for prediction.

• Approach

We use the following greedy algorithm to select the optimal feature type combination.
In building a model, the first feature type selected is the one with the largest predictive information quantity for the prediction of the derivation rule among all the candidates, that is,

$$F_1 = \arg\max_{F_i \in \Omega} PIQ(F_i; R) \quad (15)$$

where $\Omega$ is the set of candidate feature types. Given that the model has already selected the feature type combination $F_1, F_2, \ldots, F_j$, the next feature type added to the model is the one with the largest predictive information gain, among all candidates except $F_1, F_2, \ldots, F_j$, given that $F_1, F_2, \ldots, F_j$ is known. That is,

$$F_{j+1} = \arg\max_{F_i \in \Omega,\; F_i \notin \{F_1, F_2, \ldots, F_j\}} PIG(F_i; R \,|\, F_1, F_2, \ldots, F_j) \quad (16)$$

• Data

Among the feature types mentioned above, the optimal feature type combination (i.e. the feature type combination with the largest predictive information summation) composed of 6 feature types is: the headword of the current node (type1), the headword of the parent node (type2), the headword of the grandpa node (type3), the first word in the objective word sequence (type4), the first word in the objective word sequence that can act as the headword of the current constituent (type5), and the headword of the right brother node (type6).
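The greedy procedure in (15)-(16) can be sketched as follows, with PIQ and PIG computed as drops in conditional entropy; the data layout (sample dicts with the rule under key "r") is illustrative, not from the paper:

```python
import math
from collections import Counter

def cond_entropy(samples, feats):
    """H(R | feats) from samples: dicts with feature values plus rule "r"."""
    joint = Counter((tuple(s[f] for f in feats), s["r"]) for s in samples)
    ctx = Counter(tuple(s[f] for f in feats) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / ctx[k[0]]) for k, c in joint.items())

def select_features(samples, candidates):
    """Greedy selection per (15)-(16): at each step add the candidate
    feature with the largest predictive information gain given the
    features already chosen; stop when no candidate adds information."""
    chosen = []
    h = cond_entropy(samples, [])      # empty context gives H(R)
    for _ in range(len(candidates)):
        best, gain = max(
            ((f, h - cond_entropy(samples, chosen + [f]))
             for f in candidates if f not in chosen),
            key=lambda t: t[1])
        if gain <= 0:
            break
        chosen.append(best)
        h -= gain
    return chosen

# Toy usage: f2 fully determines the rule, f1 only partially.
samples = [
    {"f1": 0, "f2": "a", "r": "x"},
    {"f1": 0, "f2": "b", "r": "y"},
    {"f1": 1, "f2": "c", "r": "z"},
    {"f1": 1, "f2": "c", "r": "z"},
]
print(select_features(samples, ["f1", "f2"]))  # ['f2']
```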
The cumulative predictive information summation is shown in Figure-2.

[Figure-2: The cumulative predictive information summation of the feature type combinations, for type1 through type6.]

6 Conclusion

The paper proposes an information-theory-based feature type analysis method which not only yields a series of heuristic conclusions on the predictive power of different feature types and feature type combinations for syntactic parsing, but also provides methodological guidance for the modelling of syntactic parsing: we can quantitatively analyse, in advance, the effect of different contextual feature types or feature type combinations on syntactic structure prediction. Based on this analysis, we can select the feature type or feature type combination with the optimal predictive information summation to build the probabilistic parsing model. However, some questions remain open. For example, what improvement in performance does this method bring to a real parser? Will improvements in PIQ lead to improvements in parsing accuracy? In future research, we will incorporate these conclusions into a real parser to see whether parsing accuracy can be improved. We will also carry out an experimental analysis of the impact of data sparseness on feature type analysis, which is critical to the performance of real systems. The proposed feature type analysis method can be used not only in probabilistic modelling for statistical syntactic parsing, but also in language modelling in more general fields [WU, 1999a] [WU, 1999b].

References

Bell, T.C., Cleary, J.G., Witten, I.H. 1992. Text Compression. Prentice Hall, Englewood Cliffs, New Jersey.

Black, E., Jelinek, F., Lafferty, J., Magerman, D.M., Mercer, R. and Roukos, S. 1992.
Towards history-based grammars: using richer models of context in probabilistic parsing. In Proceedings of the February 1992 DARPA Speech and Natural Language Workshop, Arden House, NY.

Brown, P., Jelinek, F., and Mercer, R. 1991. Basic method of probabilistic context-free grammars. IBM Internal Report, Yorktown Heights, NY.

Briscoe, T. and Carroll, J. 1993. Generalized LR parsing of natural language (corpora) with unification-based grammars. Computational Linguistics, 19(1): 25-60.

Charniak, Eugene. 1997. Statistical parsing with a context-free grammar and word statistics. In Proceedings of the Fourteenth National Conference on Artificial Intelligence, AAAI Press/MIT Press, Menlo Park.

Chen, Stanley F. and Goodman, Joshua. 1999. An Empirical Study of Smoothing Techniques for Language Modeling. Computer Speech and Language, Vol. 13.

Collins, Michael John. 1996. A new statistical parser based on bigram lexical dependencies. In Proceedings of the 34th Annual Meeting of the ACL.

Collins, Michael John. 1997. Three generative lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meeting of the ACL.

Eisner, J. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceedings of COLING-96, pages 340-345.

Goodman, Joshua. 1998. Parsing Inside-Out. PhD thesis, Harvard University.

Magerman, D.M. and Marcus, M.P. 1991. Pearl: a probabilistic chart parser. In Proceedings of the European ACL Conference, Berlin, Germany.

Magerman, D.M. and Weir, C. 1992. Probabilistic prediction and Picky chart parsing. In Proceedings of the February 1992 DARPA Speech and Natural Language Workshop, Arden House, NY.

Magerman, David M. 1995. Statistical decision-tree models for parsing. In Proceedings of the 33rd Annual Meeting of the ACL.

Marcus, Mitchell P., Santorini, Beatrice and Marcinkiewicz, Mary Ann. 1993. Building a large annotated corpus of English: the Penn treebank. Computational Linguistics, 19, pages 313-330.

Shannon, C. E. 1951.
Prediction and Entropy of Printed English. Bell System Technical Journal.

Wu, Dekai, Sui, Zhifang and Zhao, Jun. 1999a. An Information-Based Method for Selecting Feature Types for Word Prediction. In Proceedings of Eurospeech'99, Budapest, Hungary.

Wu, Dekai, Zhao, Jun and Sui, Zhifang. 1999b. An Information-Theoretic Empirical Analysis of Dependency-Based Feature Types for Word Prediction Models. In Proceedings of EMNLP'99, University of Maryland, USA.
Lexicalized Stochastic Modeling of Constraint-Based Grammars using Log-Linear Measures and EM Training

Stefan Riezler, IMS, Universität Stuttgart, [email protected]
Detlef Prescher, IMS, Universität Stuttgart, [email protected]
Jonas Kuhn, IMS, Universität Stuttgart, [email protected]
Mark Johnson, Cog. & Ling. Sciences, Brown University, [email protected]

Abstract

We present a new approach to stochastic modeling of constraint-based grammars that is based on log-linear models and uses EM for estimation from unannotated data. The techniques are applied to an LFG grammar for German. Evaluation on an exact match task yields 86% precision for an ambiguity rate of 5.4, and 90% precision on a subcat frame match for an ambiguity rate of 25. Experimental comparison to training from a parsebank shows a 10% gain from EM training. Also, a new class-based grammar lexicalization is presented, showing a 10% gain over unlexicalized models.

1 Introduction

Stochastic parsing models capturing contextual constraints beyond the dependencies of probabilistic context-free grammars (PCFGs) are currently the subject of intensive research. An interesting feature common to most such models is the incorporation of contextual dependencies on individual head words into rule-based probability models. Such word-based lexicalizations of probability models are used successfully in the statistical parsing models of, e.g., Collins (1997), Charniak (1997), or Ratnaparkhi (1997). However, it is still an open question which kind of lexicalization, e.g., statistics on individual words or statistics based upon word classes, is the best choice. Secondly, these approaches have in common the fact that the probability models are trained on treebanks, i.e., corpora of manually disambiguated sentences, and not from corpora of unannotated sentences.
In all of the cited approaches, the Penn Wall Street Journal Treebank (Marcus et al., 1993) is used, the availability of which obviates the standard effort required for treebank training: hand-annotating large corpora of specific domains of specific languages with specific parse types. Moreover, common wisdom is that training from unannotated data via the expectation-maximization (EM) algorithm (Dempster et al., 1977) yields poor results unless at least partial annotation is applied. Experimental results confirming this wisdom have been presented, e.g., by Elworthy (1994) and Pereira and Schabes (1992) for EM training of Hidden Markov Models and PCFGs. In this paper, we present a new lexicalized stochastic model for constraint-based grammars that employs a combination of head-word frequencies and EM-based clustering for grammar lexicalization. Furthermore, we make crucial use of EM for estimating the parameters of the stochastic grammar from unannotated data. Our usage of EM was initiated by the current lack of large unification-based treebanks for German. However, our experimental results also show an exception to the common wisdom of the insufficiency of EM for highly accurate statistical modeling. Our approach to lexicalized stochastic modeling is based on the parametric family of log-linear probability models, which is used to define a probability distribution on the parses of a Lexical-Functional Grammar (LFG) for German. In previous work on log-linear models for LFG by Johnson et al. (1999), pseudo-likelihood estimation from annotated corpora has been introduced and experimented with on a small scale. However, to our knowledge, to date no large LFG-annotated corpora of unrestricted German text are available. Fortunately, algorithms exist for statistical inference of log-linear models from unannotated data (Riezler, 1999).
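A log-linear parse distribution of the kind discussed above assigns each parse a probability proportional to an exponentiated, weighted sum of property values times a reference probability. A minimal sketch (the function names and data layout are illustrative, not from the paper):

```python
import math

def loglinear_dist(parses, lam, props, p0=None):
    """p_lambda(x) = Z^{-1} * exp(lambda . nu(x)) * p0(x) over a finite set
    of parses; `props(x)` returns the property vector nu(x), and p0 defaults
    to the uniform reference distribution."""
    p0 = p0 or {x: 1.0 / len(parses) for x in parses}
    unnorm = {x: math.exp(sum(l * v for l, v in zip(lam, props(x)))) * p0[x]
              for x in parses}
    z = sum(unnorm.values())
    return {x: s / z for x, s in unnorm.items()}

# Toy usage: a single property that fires on parse "x1", with weight ln 3.
d = loglinear_dist(["x1", "x2"], [math.log(3.0)],
                   lambda x: [1.0] if x == "x1" else [0.0])
print(round(d["x1"], 6))  # 0.75
```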
We apply this algorithm to estimate log-linear LFG models from large corpora of newspaper text. In our largest experiment, we used 250,000 parses which were produced by parsing 36,000 newspaper sentences with the German LFG. Experimental evaluation of our models on an exact-match task (i.e. the percentage of exact matches of the most probable parse with the correct parse) on manually examined examples with on average 5.4 analyses gave 86% precision. Another evaluation on a verb frame recognition task (i.e. the percentage of agreement between the subcategorization frame of the main verb of the most probable parse and that of the correct parse) gave 90% precision on manually disambiguated examples with an average ambiguity of 25. Clearly, a direct comparison of these results to state-of-the-art statistical parsers cannot be made because of different training and test data and other evaluation measures. However, we would like to draw the following conclusions from our experiments:

- The problem of chaotic convergence behaviour of EM estimation can be solved for log-linear models.
- EM does help constraint-based grammars; e.g., using about 10 times more sentences and about 100 times more parses for EM training than for training from an automatically constructed parsebank can improve precision by about 10%.
- Class-based lexicalization can yield a gain in precision of about 10%.

In the rest of this paper we introduce incomplete-data estimation for log-linear models (Sec. 2), present the actual design of our models (Sec. 3), and report our experimental results (Sec. 4).

2 Incomplete-Data Estimation for Log-Linear Models

2.1
Log-Linear Models

A log-linear distribution $p_\lambda(x)$ on the set of analyses $\mathcal{X}$ of a constraint-based grammar can be defined as follows:

$$p_\lambda(x) = Z_\lambda^{-1}\, e^{\lambda \cdot \nu(x)}\, p_0(x)$$

where $Z_\lambda = \sum_{x \in \mathcal{X}} e^{\lambda \cdot \nu(x)} p_0(x)$ is a normalizing constant, $\lambda = (\lambda_1, \ldots, \lambda_n) \in \mathbb{R}^n$ is a vector of log-parameters, $\nu = (\nu_1, \ldots, \nu_n)$ is a vector of property-functions $\nu_i : \mathcal{X} \to \mathbb{R}$ for $i = 1, \ldots, n$, $\lambda \cdot \nu(x)$ is the vector dot product $\sum_{i=1}^n \lambda_i \nu_i(x)$, and $p_0$ is a fixed reference distribution.

The task of probabilistic modeling with log-linear distributions is to build salient properties of the data into the probability model as property-functions $\nu_i$. For a given vector $\nu$ of property-functions, the task of statistical inference is to tune the parameters $\lambda$ to best reflect the empirical distribution of the training data.

2.2 Incomplete-Data Estimation

Standard numerical methods for statistical inference of log-linear models from fully annotated data, so-called complete data, are the iterative scaling methods of Darroch and Ratcliff (1972) and Della Pietra et al. (1997). For data consisting of unannotated sentences, so-called incomplete data, the iterative method of the EM algorithm (Dempster et al., 1977) has to be employed. However, since even complete-data estimation for log-linear models requires iterative methods, an application of EM to log-linear models results in an algorithm which is expensive since it is doubly iterative. A singly iterative algorithm interleaving EM and iterative scaling into a mathematically well-defined estimation method for log-linear models from incomplete data is the IM algorithm of Riezler (1999). Applying this algorithm to stochastic constraint-based grammars, we assume the following to be given: a training sample of unannotated sentences $y$ from a set $\mathcal{Y}$, observed with empirical

Input: Reference model $p_0$; property-functions vector $\nu$ with constant $\nu_\#$; parses $X(y)$ for each $y$ in the incomplete-data sample from $\mathcal{Y}$.
Output: MLE model $p_{\lambda^*}$ on $\mathcal{X}$.
Procedure: Until convergence do:
  Compute $p_\lambda$, $k_\lambda$, based on $\lambda = (\lambda_1, \ldots, \lambda_n)$.
  For $i$ from 1 to $n$ do:
    $\delta_i := \frac{1}{\nu_\#} \ln \dfrac{\sum_{y \in \mathcal{Y}} \tilde{p}(y) \sum_{x \in X(y)} k_\lambda(x|y)\, \nu_i(x)}{\sum_{x \in \mathcal{X}} p_\lambda(x)\, \nu_i(x)}$,
    $\lambda_i := \lambda_i + \delta_i$.
  Return $\lambda^* = (\lambda_1, \ldots, \lambda_n)$.

Figure 1: Closed-form version of the IM algorithm

probability $\tilde{p}(y)$, a constraint-based grammar yielding a set $X(y)$ of parses for each sentence $y$, and a log-linear model $p_\lambda(\cdot)$ on the parses $\mathcal{X} = \bigcup_{y \in \mathcal{Y},\, \tilde{p}(y) > 0} X(y)$ for the sentences in the training corpus, with known values of the property-functions $\nu$ and unknown values of $\lambda$. The aim of incomplete-data maximum likelihood estimation (MLE) is to find a value $\lambda^*$ that maximizes the incomplete-data log-likelihood $L(\lambda) = \sum_{y \in \mathcal{Y}} \tilde{p}(y) \ln \sum_{x \in X(y)} p_\lambda(x)$, i.e.,

$$\lambda^* = \arg\max_{\lambda \in \mathbb{R}^n} L(\lambda).$$

Closed-form parameter updates for this problem can be computed by the algorithm of Fig. 1, where $\nu_\#(x) = \sum_{i=1}^n \nu_i(x)$, and $k_\lambda(x|y) = p_\lambda(x) / \sum_{x' \in X(y)} p_\lambda(x')$ is the conditional probability of a parse $x$ given the sentence $y$ and the current parameter value $\lambda$. The constancy requirement on $\nu_\#$ can be enforced by adding a correction property-function $\nu_l$: choose $K = \max_{x \in \mathcal{X}} \nu_\#(x)$ and $\nu_l(x) = K - \nu_\#(x)$ for all $x \in \mathcal{X}$. Then $\sum_{i=1}^{l} \nu_i(x) = K$ for all $x \in \mathcal{X}$. Note that because of the restriction of $\mathcal{X}$ to the parses obtainable by the grammar from the training corpus, we have a log-linear probability measure only on those parses and not on all possible parses of the grammar. We shall therefore speak of mere log-linear measures in our application of disambiguation.

2.3 Searching for Order in Chaos

For incomplete-data estimation, a sequence of likelihood values is guaranteed to converge to a critical point of the likelihood function $L$. This is shown for the IM algorithm in Riezler (1999). The process of finding likelihood maxima is chaotic in that the final likelihood value is extremely sensitive to the starting values of $\lambda$, i.e.
limit points can be local maxima (or saddle points), which are not necessarily also global maxima. A way to search for order in this chaos is to search for starting values which are hopefully attracted by the global maximum of L. This problem can best be explained in terms of the minimum divergence paradigm (Kullback, 1959), which is equivalent to the maximum likelihood paradigm by the following theorem. Let p[f] = Σ_{x∈X} p(x) f(x) be the expectation of a function f with respect to a distribution p: The probability distribution p* that minimizes the divergence D(p||p_0) to a reference model p_0 subject to the constraints p[ν_i] = q[ν_i], i = 1, ..., n is the model in the parametric family of log-linear distributions p_λ that maximizes the likelihood L(λ) = q[ln p_λ] of the training data.¹

¹ If the training sample consists of complete data

Reasonable starting values for minimum divergence estimation is to set λ_i = 0 for i = 1, ..., n. This yields a distribution which minimizes the divergence to p_0, over the set of models p to which the constraints p[ν_i] = q[ν_i], i = 1, ..., n have yet to be applied. Clearly, this argument applies to both complete-data and incomplete-data estimation. Note that for a uniformly distributed reference model p_0, the minimum divergence model is a maximum entropy model (Jaynes, 1957). In Sec. 4, we will demonstrate that a uniform initialization of the IM algorithm shows a significant improvement in likelihood maximization as well as in linguistic performance when compared to standard random initialization.

3 Property Design and Lexicalization

3.1 Basic Configurational Properties

The basic properties employed in our models are similar to the properties of Johnson et al. (1999), which incorporate general linguistic principles into a log-linear model. They refer to both the c(onstituent)-structure and the f(eature)-structure of the LFG parses.
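The IM update of Figure 1, initialized uniformly (λ = 0, the minimum-divergence starting point argued for above), can be sketched on toy data. This is a minimal illustration, not the authors' implementation; the sentences, parses, and property values are all invented:

```python
import math

# Hypothetical incomplete data: for each sentence y, the property vectors
# nu(x) of its candidate parses X(y). Parses of different sentences are
# kept distinct, and the property sum nu_#(x) is a constant 2.0 for every
# parse, as the algorithm requires.
data = {
    "y1": [[2.0, 0.0], [1.0, 1.0]],
    "y2": [[0.0, 2.0], [1.0, 1.0]],
}
p_tilde = {"y1": 0.75, "y2": 0.25}  # empirical distribution over sentences
NU_SHARP = 2.0                      # constant value of nu_#
lam = [0.0, 0.0]                    # uniform initialization

def w(nu):
    """Unnormalized weight e^{lambda . nu(x)} (p0 uniform, so omitted)."""
    return math.exp(sum(l * v for l, v in zip(lam, nu)))

all_parses = [nu for parses in data.values() for nu in parses]

for _ in range(100):  # "until convergence"
    deltas = []
    for i in range(len(lam)):
        # numerator: conditional expectation of nu_i under k_lambda(x|y)
        num = 0.0
        for y, parses in data.items():
            zy = sum(w(nu) for nu in parses)
            num += p_tilde[y] * sum(w(nu) / zy * nu[i] for nu in parses)
        # denominator: expectation of nu_i under p_lambda over all of X
        z = sum(w(nu) for nu in all_parses)
        den = sum(w(nu) / z * nu[i] for nu in all_parses)
        deltas.append(math.log(num / den) / NU_SHARP)
    lam = [l + d for l, d in zip(lam, deltas)]

# The first property dominates the empirical data (p_tilde favors y1),
# so the fitted log-parameters satisfy lam[0] > lam[1].
```

For this one-dimensional toy problem the incomplete-data log-likelihood is concave in λ_1 − λ_2, so the iteration approaches the unique maximizer; with random starting values on a harder problem, different local maxima could be reached, which is the point of the uniform initialization.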
Examples are properties for:

- c-structure nodes, corresponding to standard production properties,
- c-structure subtrees, indicating argument versus adjunct attachment,
- f-structure attributes, corresponding to grammatical functions used in LFG,
- atomic attribute-value pairs in f-structures,
- complexity of the phrase being attached to, thus indicating both high and low attachment,
- non-right-branching behavior of nonterminal nodes,
- non-parallelism of coordinations.

x ∈ X, the expectation q[·] corresponds to the empirical expectation p~[·]. If we observe incomplete data y ∈ Y, the expectation q[·] is replaced by the conditional expectation p~[k_λ′[·]] given the observed data y and the current parameter value λ′.

3.2 Class-Based Lexicalization

Our approach to grammar lexicalization is class-based in the sense that we use class-based estimated frequencies f_c(v, n) of head-verbs v and argument head-nouns n instead of pure frequency statistics or class-based probabilities of head word dependencies. Class-based estimated frequencies are introduced in Prescher et al. (2000) as the frequency f(v, n) of a (v, n)-pair in the training corpus, weighted by the best estimate of the class-membership probability p(c|v, n) of an EM-based clustering model on (v, n)-pairs, i.e., f_c(v, n) = max_{c∈C} p(c|v, n) (f(v, n) + 1). As is shown in Prescher et al. (2000) in an evaluation on lexical ambiguity resolution, a gain of about % can be obtained by using the class-based estimated frequency f_c(v, n) as disambiguation criterion instead of class-based probabilities p(n|v).
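The class-based estimated frequency f_c(v, n) can be sketched in a few lines. The German (verb, noun) counts and class-membership probabilities below are invented, and the add-one term follows the formula as given above:

```python
# Hypothetical corpus frequencies f(v, n) of (head-verb, head-noun) pairs.
freq = {("trinken", "Wasser"): 3, ("trinken", "Problem"): 1}

# Hypothetical class-membership probabilities p(c | v, n), as produced by
# an EM-based clustering model over (v, n)-pairs.
p_class = {
    ("trinken", "Wasser"): {"LIQUID": 0.9, "ABSTRACT": 0.1},
    ("trinken", "Problem"): {"LIQUID": 0.2, "ABSTRACT": 0.8},
}

def f_c(v, n):
    """Class-based estimated frequency: max_c p(c|v,n) * (f(v,n) + 1)."""
    pair = (v, n)
    return max(p_class[pair].values()) * (freq[pair] + 1)

fc_water = f_c("trinken", "Wasser")    # 0.9 * (3 + 1)
fc_problem = f_c("trinken", "Problem")  # 0.8 * (1 + 1)
```

The smoothed, class-weighted frequency lets a pair that is both frequent and confidently clustered dominate, which is what the disambiguation criterion relies on.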
In order to make the most direct use possible of this fact, we incorporated the decisions of the disambiguator directly into additional properties for the grammatical relations of the subject, direct object, indirect object, infinitival object, oblique and adjunctival dative and accusative preposition, for active and passive forms of the first three verbs in each parse. Let v_r(x) be the verbal head of grammatical relation r in parse x, and n_r(x) the nominal head of grammatical relation r in x. Then a lexicalized property ν_r for grammatical relation r is defined as

  ν_r(x) = 1 if f_c(v_r(x), n_r(x)) ≥ f_c(v_r(x′), n_r(x′)) for all x′ ∈ X(y),
           0 otherwise.

The property-function ν_r thus pre-disambiguates the parses x ∈ X(y) of a sentence y according to f_c(v, n), and stores the best parse directly instead of taking the actual estimated frequencies as its value. In Sec. 4, we will see that an incorporation of this pre-disambiguation routine into the models improves performance in disambiguation by about 0%.

Figure 2: Evaluation on exact match task (columns: basic model, lexicalized model, selected + lexicalized model; rows: complete-data and incomplete-data estimation; cells report precision P and effectiveness E).

Figure 3: Evaluation on frame match task (same models and estimation methods).

4 Experiments

4.1 Incomplete Data and Parsebanks

In our experiments, we used an LFG grammar for German² for parsing unrestricted text. Since training was faster than parsing, we parsed in advance and stored the resulting packed c/f-structures.
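Going back to the lexicalized property ν_r defined above, its pre-disambiguation behavior can be sketched as follows; the parse structures and f_c values are invented:

```python
# Hypothetical parses X(y) of one sentence, each carrying the
# (verb, noun) head pair of its subject relation, plus invented
# class-based estimated frequencies f_c.
fc = {("sehen", "Mann"): 3.6, ("sehen", "Fernrohr"): 1.2}
parses = [
    {"id": "x1", "subj": ("sehen", "Mann")},
    {"id": "x2", "subj": ("sehen", "Fernrohr")},
]

def nu_subj(x, candidates):
    """1 iff x maximizes f_c over all candidate parses of the sentence."""
    best = max(fc[p["subj"]] for p in candidates)
    return 1 if fc[x["subj"]] >= best else 0

# The property stores the pre-disambiguation decision directly:
values = {p["id"]: nu_subj(p, parses) for p in parses}
```

Only the f_c-best parse receives value 1, so the log-linear model sees the disambiguator's decision as a binary feature rather than the raw frequencies.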
The low ambiguity rate of the German LFG grammar allowed us to restrict the training data to sentences with at most 0 parses. The resulting training corpus of unannotated, incomplete data consists of approximately ,000 sentences of online available German newspaper text, comprising approximately 0,000 parses. In order to compare the contribution of unambiguous and ambiguous sentences to the estimation results, we extracted a subcorpus of ,000 sentences, for which the LFG grammar produced a unique parse, from the full training corpus. The average sentence length of . for this automatically constructed parsebank is only slightly smaller than that of 0. for the full set of ,000 training sentences and 0,000 parses. Thus, we conjecture that the parsebank includes a representative variety of linguistic phenomena. Estimation from this automatically disambiguated parsebank enjoys the same complete-data estimation properties³ as training from manually disambiguated treebanks. This makes a comparison of complete-data estimation from this parsebank to incomplete-data estimation from the full set of training data interesting.

² The German LFG grammar is being implemented in the Xerox Linguistic Environment (XLE, see Maxwell and Kaplan) as part of the Parallel Grammar (ParGram) project at the IMS Stuttgart. The coverage of the grammar is about 0% for unrestricted newspaper text. For the experiments reported here, the effective coverage was lower, since the corpus preprocessing we applied was minimal. Note that for the disambiguation task we were interested in, the overall grammar coverage was of subordinate relevance.

4.2 Test Data and Evaluation Tasks

To evaluate our models, we constructed two different test corpora. We first parsed with the LFG grammar 0 sentences which are used for illustrative purposes in the foreign language learner's grammar of Helbig and Buscha.
In a next step, the correct parse was indicated by a human disambiguator, according to the reading intended in Helbig and Buscha. Thus a precise indication of correct c/f-structure pairs was possible. However, the average ambiguity of this corpus is only . parses per sentence, for sentences with on average . words. In order to evaluate on sentences with higher ambiguity rate, we manually disambiguated further sentences of LFG-parsed newspaper text. The sentences of this corpus have on average parses and . words.

³ For example, convergence to the global maximum of the complete-data log-likelihood function is guaranteed, which is a good condition for highly precise statistical disambiguation.

We tested our models on two evaluation tasks. The statistical disambiguator was tested on an exact match task, where exact correspondence of the full c/f-structure pair of the hand-annotated correct parse and the most probable parse is checked. Another evaluation was done on a frame match task, where exact correspondence only of the subcategorization frame of the main verb of the most probable parse and the correct parse is checked. Clearly, the latter task involves a smaller effective ambiguity rate, and is thus to be interpreted as an evaluation of the combined system of highly-constrained symbolic parsing and statistical disambiguation. Performance on these two evaluation tasks was assessed according to the following evaluation measures:

  Precision = #correct / (#correct + #incorrect),
  Effectiveness = #correct / (#correct + #incorrect + #don't know).

"Correct" and "incorrect" specify success/failure on the respective evaluation tasks; "don't know" cases are cases where the system is unable to make a decision, i.e. cases with more than one most probable parse.
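The two evaluation measures can be stated directly in code; the outcome counts below are hypothetical:

```python
def precision(correct, incorrect):
    """#correct / (#correct + #incorrect): decided cases only."""
    return correct / (correct + incorrect)

def effectiveness(correct, incorrect, dont_know):
    """#correct / (#correct + #incorrect + #don't know)."""
    return correct / (correct + incorrect + dont_know)

# Hypothetical outcome counts on a test corpus; "don't know" cases are
# those with more than one most probable parse.
c, i, dk = 80, 15, 5
p = precision(c, i)           # 80 / 95
e = effectiveness(c, i, dk)   # 80 / 100
```

Since the "don't know" count only enlarges the denominator, effectiveness can never exceed precision; the gap between the two measures how often the model fails to commit to a single parse.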
4.3 Experimental Results

For each task and each test corpus, we calculated a random baseline by averaging over several models with randomly chosen parameter values. This baseline measures the disambiguation power of the pure symbolic parser. The results of an exact-match evaluation on the Helbig-Buscha corpus are shown in Fig. 2. The random baseline was around % for this case. The columns list different models according to their property-vectors. Basic models consist of 0 configurational properties as described in Sec. 3.1. Lexicalized models are extended by lexical pre-disambiguation properties as described in Sec. 3.2. Selected + lexicalized models result from a simple property selection procedure where a cutoff on the number of parses with non-negative value of the property-functions was set. Estimation of basic models from complete data gave % precision (P), whereas training lexicalized and selected models from incomplete data gave .% precision, which is an improvement of %. Comparing lexicalized models in the estimation method shows that incomplete-data estimation gives an improvement of % precision over training from the parsebank. A comparison of models trained from incomplete data shows that lexicalization yields a gain of % in precision. Note also the gain in effectiveness (E) due to the pre-disambiguation routine included in the lexicalized properties. The gain due to property selection both in precision and effectiveness is minimal. A similar pattern of performance arises in an exact match evaluation on the newspaper corpus with an ambiguity rate of . The lexicalized and selected model trained from incomplete data achieved here 0.% precision and .% effectiveness, for a random baseline of around %. As shown in Fig. 3, the improvement in performance due to both lexicalization and EM training is smaller for the easier task of frame evaluation.
Here the random baseline is 0% for frame evaluation on the newspaper corpus with an ambiguity rate of . An overall gain of roughly 0% can be achieved by going from unlexicalized parsebank models (0.% precision) to lexicalized EM-trained models (0% precision). Again, the contribution to this improvement is about the same for lexicalization and incomplete-data training. Applying the same evaluation to the Helbig-Buscha corpus shows .% precision and .% effectiveness for the lexicalized and selected incomplete-data model, compared to around 0% for the random baseline.

Optimal iteration numbers were decided by repeated evaluation of the models at every fifth iteration. Fig. 4 shows the precision of lexicalized and selected models on the exact match task plotted against the number of iterations of the training algorithm.

Figure 4: Precision on exact match task in number of training iterations (y-axis: precision, 68 to 88; x-axis: 10 to 90 iterations; one curve each for complete-data and incomplete-data estimation).

For parsebank training, the maximal precision value is obtained at iterations. Iterating further shows a clear overtraining effect. For incomplete-data estimation more iterations are necessary to reach a maximal precision value. A comparison of models with random or uniform starting values shows an increase in precision of 0% to 0% for the latter. In terms of maximization of likelihood, this corresponds to the fact that uniform starting values immediately push the likelihood up to nearly its final value, whereas random starting values yield an initial likelihood which has to be increased by factors of to 0 to an often lower final value.

5 Discussion

The most direct points of comparison of our method are the approaches of Johnson et al. (1999) and Johnson and Riezler (2000).
In the first approach, log-linear models on LFG grammars using about 00 configurational properties were trained on treebanks of about 00 sentences by maximum pseudo-likelihood estimation. Precision was evaluated on an exact match task in a 0-way cross validation paradigm for an ambiguity rate of 0, and achieved % for the first approach. Johnson and Riezler (2000) achieved a gain of % over this result by including a class-based lexicalization. Our best models clearly outperform these results, both in terms of precision relative to ambiguity and in terms of relative gain due to lexicalization.

A comparison of performance is more difficult for the lexicalized PCFG of Beil et al. (1999), which was trained by EM on 0,000 sentences of German newspaper text. There, a 0.% precision is reported on a verb frame recognition task on examples. However, the gain achieved by Beil et al. (1999) due to grammar lexicalization is only %, compared to about 0% in our case. A comparison is difficult also for most other state-of-the-art PCFG-based statistical parsers, since different training and test data, and most importantly, different evaluation criteria were used. A comparison of the performance gain due to grammar lexicalization shows that our results are on a par with that reported in Charniak (1997).

6 Conclusion

We have presented a new approach to stochastic modeling of constraint-based grammars. Our experimental results show that EM training can in fact be very helpful for accurate stochastic modeling in natural language processing. We conjecture that this result is due partly to the fact that the space of parses produced by a constraint-based grammar is only mildly incomplete, i.e. the ambiguity rate can be kept relatively low. Another reason may be that EM is especially useful for log-linear models, where the search space in maximization can be kept under control.
Furthermore, we have introduced a new class-based grammar lexicalization, which again uses EM training and incorporates a pre-disambiguation routine into log-linear models. An impressive gain in performance could also be demonstrated for this method.

Clearly, a central task of future work is a further exploration of the relation between complete-data and incomplete-data estimation for larger, manually disambiguated treebanks. An interesting question is whether a systematic variation of training data size along the lines of the EM-experiments of Nigam et al. (2000) for text classification will show similar results, namely a systematic dependence of the relative gain due to EM training on the relative sizes of unannotated and annotated data. Furthermore, it is important to show that EM-based methods can be applied successfully also to other statistical parsing frameworks.

Acknowledgements

We thank Stefanie Dipper and Bettina Schrader for help with disambiguation of the test suites, and the anonymous ACL reviewers for helpful suggestions. This research was supported by the ParGram project and the project B of the SFB 340 of the DFG.

References

Franz Beil, Glenn Carroll, Detlef Prescher, Stefan Riezler, and Mats Rooth. 1999. Inside-outside estimation of a lexicalized PCFG for German. In Proceedings of the 37th ACL, College Park, MD.

Eugene Charniak. 1997. Statistical parsing with a context-free grammar and word statistics. In Proceedings of the 14th AAAI, Menlo Park, CA.

Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th ACL, Madrid.

J. N. Darroch and D. Ratcliff. 1972. Generalized iterative scaling for log-linear models. The Annals of Mathematical Statistics, 43(5):1470-1480.

Stephen Della Pietra, Vincent Della Pietra, and John Lafferty. 1997. Inducing features of random fields. IEEE PAMI, 19(4):380-393.

A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977.
Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(B):1-38.

David Elworthy. 1994. Does Baum-Welch re-estimation help taggers? In Proceedings of the 4th ANLP, Stuttgart.

Gerhard Helbig and Joachim Buscha. Deutsche Grammatik. Ein Handbuch für den Ausländerunterricht. Langenscheidt, Leipzig.

Edwin T. Jaynes. 1957. Information theory and statistical mechanics. Physical Review, 106:620-630.

Mark Johnson and Stefan Riezler. 2000. Exploiting auxiliary distributions in stochastic unification-based grammars. In Proceedings of the 1st NAACL, Seattle, WA.

Mark Johnson, Stuart Geman, Stephen Canon, Zhiyi Chi, and Stefan Riezler. 1999. Estimators for stochastic unification-based grammars. In Proceedings of the 37th ACL, College Park, MD.

Solomon Kullback. 1959. Information Theory and Statistics. Wiley, New York.

Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn treebank. Computational Linguistics, 19(2):313-330.

John Maxwell and R. Kaplan. Unification-based parsers that automatically take advantage of context freeness. Unpublished manuscript, Xerox Palo Alto Research Center.

Kamal Nigam, Andrew McCallum, Sebastian Thrun, and Tom Mitchell. 2000. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2/3):103-134.

Fernando Pereira and Yves Schabes. 1992. Inside-outside reestimation from partially bracketed corpora. In Proceedings of the 30th ACL, Newark, Delaware.

Detlef Prescher, Stefan Riezler, and Mats Rooth. 2000. Using a probabilistic class-based lexicon for lexical ambiguity resolution. In Proceedings of the 18th COLING, Saarbrücken.

Adwait Ratnaparkhi. 1997. A linear observed time statistical parser based on maximum entropy models. In Proceedings of EMNLP-2.

Stefan Riezler. 1998. Probabilistic Constraint Logic Programming. Ph.D.
thesis, Seminar für Sprachwissenschaft, Universität Tübingen. AIMS Report, IMS, Universität Stuttgart.
Utilizing the World Wide Web as an Encyclopedia: Extracting Term Descriptions from Semi-Structured Texts

Atsushi Fujii and Tetsuya Ishikawa
University of Library and Information Science
1-2 Kasuga, Tsukuba, 305-8550, Japan
fujii@ulis.ac.jp

Abstract

In this paper, we propose a method to extract descriptions of technical terms from Web pages in order to utilize the World Wide Web as an encyclopedia. We use linguistic patterns and HTML text structures to extract text fragments containing term descriptions. We also use a language model to discard extraneous descriptions, and a clustering method to summarize resultant descriptions. We show the effectiveness of our method by way of experiments.

1 Introduction

Reflecting the growth in utilization of machine readable texts, extraction and acquisition of linguistic knowledge from large corpora has been one of the major topics within the natural language processing (NLP) community. A sample of linguistic knowledge targeted in past research includes grammars (Kupiec and Maxwell, 1992), word classes (Hatzivassiloglou and McKeown, 1993) and bilingual lexicons (Smadja et al., 1996). While human experts find it difficult to produce exhaustive and consistent linguistic knowledge, automatic methods can help alleviate problems
associated with manual construction.

Term descriptions, which are usually carefully organized in encyclopedias, are valuable linguistic knowledge, but have seldom been targeted in past NLP literature. As with other types of linguistic knowledge relying on human introspection and supervision, constructing encyclopedias is quite expensive. Additionally, since existing encyclopedias are usually revised every few years, in many cases users find it difficult to obtain descriptions for newly created terms.

To cope with the above limitation of existing encyclopedias, it is possible to use a search engine on the World Wide Web as a substitute, expecting that certain Web pages will describe the submitted keyword. However, since keyword-based search engines often retrieve a surprisingly large number of Web pages, it is time-consuming to identify pages that satisfy the user's information needs.

In view of this problem, we propose a method to automatically extract term descriptions from Web pages and summarize them. In this paper, we generally use "Web pages" to refer to those pages containing textual contents, excluding those with only image/audio information. Besides this, we specifically target descriptions for technical
terms, and thus "terms" generally refer to technical terms.

In brief, our method extracts fragments of Web pages, based on patterns (or templates) typically used to describe terms. Web pages are in a sense semi-structured data, because HTML (HyperText Markup Language) tags provide the textual information contained in a page with a certain structure. Thus, our method relies on both linguistic and structural description patterns.

We used several NLP techniques to semi-automatically produce linguistic patterns. We call this approach "NLP-based method". We also produced several heuristics associated with the use of HTML tags, which we call "HTML-based method". While the former method is language-dependent, and currently applied only to Japanese, the latter method is theoretically language-independent.

Our research can be classified from several different perspectives. As explained in the beginning of this section, our research can be seen as linguistic knowledge extraction. Specifically, our research is related to Web mining methods (Nie et al., 1999; Resnik, 1999). From an information retrieval point of view, our research can be seen as constructing domain-specific (or task-oriented) Web search engines and software agents (Etzioni, 1997; McCallum et al., 1999).

2 Overview

Our objective is to collect encyclopedic knowledge from the Web, for which we designed a system involving two processes. As with existing Web search systems, in the background process our system periodically updates a database consisting of term descriptions (a description database), while users can browse term descriptions anytime in the foreground process.

In the background process, depicted as in Figure 1, a search engine searches the Web for pages containing terms listed in a lexicon. Then, fragments (such as paragraphs) of retrieved Web pages are extracted based on linguistic and structural description patterns. Note that as a preprocessing for the extraction process, we discard newline codes, redundant white spaces, and HTML tags that our extraction method does not use, in order to standardize the layout of Web pages.

However, in some cases the extraction process is unsuccessful, and thus extracted fragments are not linguistically understandable. In addition, Web pages contain some non-linguistic information, such as special characters (symbols) and e-mail addresses for contact, along with linguistic information. Consequently, those
"noises" decrease extraction accuracy.

lexicon / description database / search engine / browser / extraction / filtering / clustering / Web / language model / extraction patterns

Figure 1: The control flow of our extraction system.

In view of this problem, we perform a filtering to enhance the extraction accuracy. In practice, we use a language model to measure the extent to which a given extracted fragment can be linguistic, and index only fragments judged as linguistic into the description database.

At the same time, the URLs of Web pages from which descriptions were extracted are also indexed in the database, so that users can browse the full content, in the case where descriptions extracted are not satisfactory.

In the case where a number of descriptions are extracted for a single term, the resultant description set is redundant, because it contains a number of similar descriptions. Thus, it is preferable to summarize descriptions, rather than to present all the descriptions as a list.

For this purpose, we use a clustering method to divide descriptions for a single term into a certain number of clusters, and present only descriptions that are representative for each cluster. As a result, it is expected that descriptions resembling one another will be in the same cluster, and that each
cluster corresponds to different viewpoints and word senses.

Possible sources of the lexicon include existing machine readable terminology dictionaries, which often list terms, but lack descriptions. However, since new terms unlisted in existing dictionaries also have to be considered, newspaper articles and magazines distributed via the Web can be possible sources. In other words, a morphological analysis is performed periodically (e.g., weekly) to identify word tokens from those resources, in order to enhance the lexicon. However, this is not the central issue in this paper.

In the foreground process, given an input term, a browser presents one or more descriptions to a user. In the case where the database does not index descriptions for the given term, term descriptions are dynamically extracted as in the background process. The background process is optional, and thus term descriptions can always be obtained dynamically. However, this potentially decreases the time efficiency for a real-time response.

Figure 2 shows a Web browser, in which our prototype page presents several Japanese descriptions extracted for the word
"deeta mainingu" (data mining). For example, an English translation for the first description is as follows:

  data mining is a process that collects data for a certain task, and retrieves relations latent in the data.

In Figure 2, each description uses various expressions, but describes the same content: data mining is a process which discovers rules latent in given databases. It is expected that users can understand what data mining is, by browsing some of those descriptions. In addition, each headword ("deeta mainingu" in this case) positioned above each description is linked to the Web page from which the description was extracted.

Figure 2: Example Japanese descriptions for "deeta mainingu" (data mining).

In the following sections, we first elaborate on the NLP/HTML-based extraction methods in Section 3. We then elaborate on noise reduction and clustering methods in Sections 4 and 5, respectively. Finally, in Section 6 we investigate the effectiveness of our extraction method by way of experiments.

3 Extracting Term Descriptions

3.1 NLP-based extraction method

The crucial content for the NLP-based extraction method is the way to produce linguistic patterns that can be used to describe technical terms. However, human introspection is
¯,¹g¬,½²ÌgÅ¿ª"¸°²ª«¶¹ª·¶&°¼»^«^¯¦Ì^­tª¬¿àǰ?Å¿Òh°´©Ì^¸°´±·¯Lªt° ®D¶Ç­·­·¬¿ÉSÅ¿°1¹°?­·½¼±¬¿®gª·¬¿¶Ç©Ô®g¯Lªtª·°²±©^­²Ó ÖØ«XÌg­²³;µ°ÃÌ^­t°?¹ ò Ùô ªt°´½²«^©g¬ÀêX̐°?­Úªt¶ ­t°´¸ ¬ Ä ¯¦Ìª·¶¦¸ ¯Lª·¬À½´¯LÅrÅ¿Ò ½¼¶ÇÅÀÅÀ°´½²ª¹^°´­·½²±¬¿®gª·¬¿¶Ç© ®g¯Lªtª·°²±©g­ Âz±·¶Ç¸ ¸ ¯L½²«^¬r©° ±·°´¯¦¹^¯LÉgÅÀ°ï°´©g½¼Ò½²Å¿¶¦®~°?¹^¬r¯L­²³bÉZ°¼Ä ½²¯¦Ìg­t°ïª·«^°²Ò Ìg­·Ì^¯¦ÅÀÅÀÒ\½¼¶Ç©Xª·¯¦¬À© ¯­·¬À˦©g¬D½²¯¦©|ª·ÅÀÒ ÅÀ¯L±·ËL°º©XÌg¸.ÉD°´±í¶¦Â ¹°?­·½¼±¬¿®gª·¬À¶¦©g­ Âz¶¦±í°²»¬r­tª·¬À©^Ë ªt°´±¸ ­²Ó¢¨C©®^±¯¦½¼ª·¬r½¼°¦³ÇµG°ØÌg­t°´¹ª·«^°f¦¯O®S¯L©^°?­t°u!wv"Ä æ  ×ïÈζL±ÅÀ¹  ©g½¼Ò½²ÅÀ¶L®D°?¹^¬À¯ ñ Õ °?¬¿É~¶Ç©g­·«^¯^³ù´û¦ûx|õ³ µ «^¬r½«0¬À©g½´ÅÀÌ^¹^°?­&¯O®g®g±t¶c»^¬À¸ ¯Lªt°?Å¿Òyxz³zz?zV°´©|ª·±·¬À°´­ ±·°´År¯Oªt°?¹Ôªt¶ àO¯L±¬¿¶¦Ìg­wS°´ÅÀ¹g­²Ó °´Âz¶¦±·° ½¼¶ÇÅÀÅ¿°?½¼ª¬À©^Ë ¹^°´­·½²±·¬À®^ª¬¿¶Ç© ®S¯Oª·ªt°´±·©g­²³ ª·«^±·¶¦Ì^˦«V¯Ô®^±·°´År¬À¸ ¬À©g¯O±·Ò6­tª·Ìg¹^Ò ¶Ç©vª«° °´©g½¼Ò½²ÅÀ¶OÄ ®D°´¹g¬À¯6µ°+Ì^­t°?¹Z³ µ°ÔÂz¶ÇÌ^©g¹0ª«^¯Lª;ª·°²±¸A¹°?­·½¼±¬¿®^Ä ª·¬À¶¦©g­ØÂz±·°?ê|Ì^°´©ª·Å¿Òh½¼¶Ç©Xª¯L¬r©Ô­·¯¦ÅÀ¬À°´©XªØ®g¯Lªtª·°²±©g­"½²¶¦©Ä ­·¬À­tª¬À©^ËζLÂEª µ¶yǯO®S¯L©^°´­t° {C>2:a|014n|3>< Ê®g«^±·¯¦­t°?­²Ó ÖØ«°Âz¶ÇÅÀÅÀ¶Oµ ¬À©^ËV­t°?©Xªt°?©g½¼°¦³ µ «g¬À½¼«$¹^°´­·½²±¬¿ÉD°´­,ª·«^° ªt°´±¸} ~é³ ;½²¶¦©|ª¯L¬r©^­G¯1ª ÒX®g¬r½²¯¦ÅH{,>2:a|01,4W|3>+½¼¶Ç¸1ÉS¬ Ä ©g¯Oª¬¿¶¦©2³^ª«^¯LªØ¬r­²³€ ~46‚„ƒ25 ¯¦©^¹… 0/21C7†5‡C>a ( ~46‚„ƒ5tˆ‰/21C7†5‡C> ñ ~!¬r­wˆ.õ ÓŠ ‹Œސ6‘l’C“”C‘–•†—˜†™nš3›œ"ž Ÿ6¡¢†Ž£?Ÿ¡a¤’3p¥w¦†œ0§¨©˜©ª«¨œ­¬l‘"®¥3¡6¢C¯ ° ¢±“l¡¢+¦©œ"§l¨†˜ª²¨œt¬l‘"®¥3¡6¢†¡³’´®6¢©£¢©®F6’´µ†’C¶³·lŸ¤¥]6Ÿ’C¤l¡³’3£ ¡¢†¸¢©®¥3Ž ° ’3®¹"¡†º ¨C© ¶¦ª·«^°²±µG¶¦±¹^­²³~µG°;½¼¶ÇÅÀÅ¿°?½¼ª·°´¹Ú¹^°´­·½²±¬¿®gª·¬¿¶Ç©Ú®S¯OªtÄ ªt°´±·©g­²³XÉS¯L­t°?¹,¶¦©ˆª·«^°½¼¶LĶX½´½²Ì^±t±·°?©^½²°*¶Lª µ¶»{C>2:<7 |014n|3>+®S«^±·¯¦­t°´­²³S¯L­ ¬r©hª«°EÂz¶ÇÅÀÅ¿¶Lµ ¬À©Ë ¸°²ª«¶X¹2Ó ¢¬À±·­tª´³(µG°h½¼¶ÇÅÀÅ¿°?½¼ª·°´¹0°´©|ª·±·¬À°´­;¯L­·­t¶^½²¬À¯Lªt°?¹Îµ ¬¿ª« ªt°?½«^©g¬À½´¯LÅ:ªt°´±·¸ ­éÅÀ¬À­tª·°´¹{¬À©Vª·«^°&ÈζL±ÅÀ¹  ©g½¼Ò½²ÅÀ¶OÄ ®D°´¹g¬À¯^³Ø¯¦©^¹Ï±·°´®gÅr¯L½²°´¹Ï«^°´¯¦¹µá¶L±¹g­ µ ¬¿ª·«Ï¯ÊàO¯L±¬ Ä ¯OÉSÅ¿°r ~1Ó ò ¶Lª·°Êª·«g¯Oª ª·«^°ÊÈ{¶L±ÅÀ¹  ©^½²ÒX½´Å¿¶¦®D°¼Ä 
¹^¬r¯Ø¹^°?­·½¼±¬¿ÉD°?­àL¯O±¬¿¶ÇÌ^­ªCÒ|®D°?­¶Lµ¶¦±¹^­²³o¬À©g½²ÅrÌ^¹g¬À©^Ë ªt°?½«^©g¬À½´¯LŪt°´±¸ ­²³Z«^¬À­tª·¶L±¬À½´¯LÅ®~°´¶L®SÅ¿°é¯¦©^¹®SÅÀ¯¦½¼°?­²³ ¯L©g¹ ª·«^Ì^­(¹^°´­·½²±¬¿®^ª¬¿¶Ç© ®g¯Lªtª·°²±©^­ àL¯O±·Òé¹^°²®Z°´©g¹^¬r©Ë ¶¦©Ãª«°µ¶¦±¹0ª ÒX®S°LÓ¼S¶L± °²»¯¦¸®gÅÀ°L³G°?©|ªt±¬¿°?­Âz¶¦± «^¬r­tªt¶¦±·¬r½²¯¦ÅD®~°²¶¦®SÅ¿°<Ìg­·Ì^¯¦ÅÀſ҈½¼¶Ç©|ª¯L¬À©ˆµØ«^°´©½ oµØ«^°²±·° ª·«^°.®D°²¶¦®gÅ¿°.µá°²±·°É~¶¦±©+¯¦©g¹Ôª«°?¬¿±"¸ ¯oî ¶¦± ½²¶¦©Xªt±¬ Ä ÉgÌ^ª·¬À¶¦©g­áªt¶ ª·«^°1­t¶X½²¬¿°´ª ÒLÓ Õ"¶oµ °´à¦°²±´³*Âz¶¦±ª·«^°6®S̐±·®D¶Ç­t°´­+¶¦Â1¶Ç̐±°²»Xª·±¯L½¼Ä ª·¬À¶¦©2³L¬¿ª:¬r­¹^°´­·¬À±·¯LÉSÅ¿°Gª·¶Ì^­t°°?©Xª·±·¬À°´­­t¶¦ÅÀ°´Å¿Òé¯L­·­t¶X½´¬ Ä ¯Oª·°´¹µ ¬¿ª·« ª·°´½«g©^¬r½²¯¦Åªt°´±¸ ­²Ó3Èΰ᪷«^°´©&½¼¶Ç©g­·Ì^ÅÀªt°´¹ ª·«^°  væÞ¸ ¯¦½¼«g¬À©°±·°´¯¦¹^¯LÉSÅ¿°Úªt°?½«g©^¬r½²¯¦Åت·°²±¸ ¬ Ä ©¶ÇÅ¿¶¦ËLÒ¹g¬À½¼ª¬¿¶Ç©^¯L±tÒdz+µ «g¬À½«½²¶¦©ª·¯¦¬À©g­0¯L®g®^±·¶c»^¬ Ä ¸ ¯Oª·°´Å¿Òvù?ü„z^³z?zz ªt°´±·¸ ­ ±t°?ÅÀ¯Lªt°?¹hª·¶ ª«°.¬À©^Âz¶¦±·¸ ¯OÄ ª·¬À¶¦©.®g±t¶^½¼°?­·­·¬À©Ëfg°?ÅÀ¹ ñ ¦¯O®S¯L©  Å¿°´½²ªt±·¶Ç©^¬r½Fv¬À½²ª·¬¿¶LÄ ©^¯L±·Òhæ °´­t°?¯O±½¼«+¨C©g­tª¬¿ª·Ì^ªt°¦³3ù?û¦ûI|õ³~¯L©g¹¶LÉgª¯L¬À©^°´¹ üX³ üI¦û°´©ªt±¬¿°?­é¯¦­·­t¶X½²¬r¯Oª·°´¹0µØ¬Àª·«Êª·°²±¸ ­;ÅÀ¬À­tª·°´¹b¬À© ª·«^°  væ ¹g¬À½²ª·¬¿¶Ç©^¯L±·ÒÇÓ þ °´½²¶¦©g¹Z³Gµá° Ì^­t°?¹Ãª·«^°!«g¯ þ °?©Ï¸;¶¦±·®g«^¶¦ÅÀ¶LËǬ Ä ½²¯¦Å¯¦©^¯¦Å¿ÒXͲ°´± ñ ×Ú¯Lª·­·Ìg¸¶Lª·¶,°²ª*¯¦ÅóÓÀ³ù?ûLû Lõ ³~µØ«g¬À½« «^¯¦­E½¼¶Ç¸ ¸¶¦©gÅ¿ÒvÉ~°´°´©{Ìg­t°´¹6Âz¶¦±1¸;Ìg½ «¾¦¯L®S¯L©^°´­t° ò ÙôÞ±·°´­t°?¯O±½«2³1ª·¶Û­t°´Ë¦¸°?©|ªv½²¶¦ÅÀÅÀ°´½²ªt°?¹¾°?©|ªt±¬¿°?­ ¬À©|ª·¶hµ¶¦±¹^­²³¯L©g¹6¯¦­·­·¬¿Ëǩڪ«°?¸Þ®g¯L±·ª·­ Ä¶¦ÂlÄÿ­t®~°²°?½«2Ó È{°¯¦ÅÀ­t¶Ê¹^°²à¦°?Å¿¶¦®D°´¹$­·¬À¸®SÅ¿°Ú«^°´Ì^±·¬r­tª·¬À½´­&ª·¶Î®^±·¶LÄ ¹^Ìg½¼°¾{,>2:D|"1,4n|C>0®g«^±¯L­t°?­.ÉS¯L­t°?¹Î¶¦©Êª·«^°,®g¯L±tªtͦÂlÄ ­t®D°²°?½¼«h¬r©ÂƶL±¸ ¯Oª¬¿¶Ç©ZÓ ¢¬r©^¯¦ÅÀſҦ³ØµG°Ú½²¶¦ÅrÅ¿°´½²ªt°?¹ð½¼¶Ç¸1ÉS¬À©g¯Lª·¬¿¶Ç©^­,¶LÂ.ª µ¶ {C>:D|014n|3> ®g«^±¯L­t°?­²³D¯¦©g¹+­t¶¦±·ªt°?¹Ôª«°?¸Ý¯¦½²½²¶L±¹g¬À©Ë ªt¶Ôª·«^°´¬¿±E½¼¶LĶX½´½²Ì^±·±t°?©^½²°;Âz±·°?ê|Ì^°´©g½¼Ò|³2¬À©6¹°?­·½¼°?©^¹Ä ¬À©^ËÔ¶¦±¹°´±²ÓÕ"¶Lµ °²à¦°´±´³­·¬À©g½¼° ª·«^°±·°?­·Ì^ÅÀª·¯L©ª¿{C>:a7 |014n|3>6½¼¶LĶ^½²½²Ì^±·±·°´©g½¼°?­ 
ñ °²àǰ?© µ ¬¿ª«Ú«g¬¿Ë¦«^°²±<±¯L©^ìÇÄ ¬À©^˦­õG¯O±·°.°¼»ªt±¯¦©°´¶¦Ìg­²³Dµ°1­·Ì®D°´±tà¬À­t°?¹ ñ àǰ´±¬S°´¹2³ ½¼¶¦±·±t°?½¼ª·°´¹!¶L±{¹g¬À­·½²¯L±¹°?¹Dõª·«^°bª·¶L® ùz?zѽ´¯L©g¹g¬ Ä ¹^¯Lªt°?­²³Z¯L©g¹ ®g±t¶^¹^Ìg½¼°?¹Úü„zˆ¹^°?­·½¼±¬¿®gª·¬¿¶Ç© ®S¯Oª·ªt°´±·©g­²Ó ¢¬¿ËÇ̐±·°Vý0­·«¶OµØ­h¯ÊÂÆ±·¯LËǸ;°?©|ªˆ¶¦Â.ª·«^°v±·°´­·ÌgÅ¿ª¯L©ª ®g¯Lªtª·°²±©g­¢¯¦©g¹;ª«°?¬¿±  ©^˦ÅÀ¬r­·«;ËÇÅ¿¶Ç­·­t°´­²Ó¢¨ ©;ª«^¬r­_gËLÄ Ì±·°¦³u ~A +¯¦©g¹À ©ˆÁ ¹°?©¶¦ªt°&àL¯O±¬À¯LÉSÅ¿°´­<ª·¶µØ«g¬À½¼« ªt°?½«^©g¬À½´¯LÅ ªt°´±¸ ­,¯¦©^¹ð­t°?©Xª·°´©g½¼°Âz±¯L˦¸°´©ª·­,½²¯¦© ÉD°EÌg©^¬g°?¹Z³S±·°´­t®D°?½¼ª¬¿àǰ?Å¿ÒLÓ Õ"°²±·°¦³:µ°h¯L±·°h¬À©0¯ ®Z¶¦­·¬Àª·¬¿¶Ç©Îªt¶6°¼»ªt±¯¦½¼ª;­t°?©XÄ ªt°?©^½²°´­ª«^¯Lª<¸ ¯Oª½« µ ¬¿ª«v¹^°´­·½²±·¬À®^ª¬¿¶¦©6®S¯Oª·ªt°´±·©g­²³ Âz±·¶¦¸ÑÈ{°´É<®S¯O˦°´­2±·°´ªt±¬¿°²à|°´¹ɐÒ"ª·«^°(­t°?¯O±½¼«*°?©^˦¬À©^° ñ ­t°´°­¢¬À˦Ì^±t°.ùcõ Ó¨C©ª·«g¬À­:®g±t¶X½²°´­·­²³Çµá°G¹¶.©¶¦ª:½²¶¦©Ä ÂlÃ0ÄÃ0ÅÆ3ÇÆ ÈÅÉ0ÊËǍÌJÍFÊÎ"Ç©Ç ÏEÐWÑ,Ò?ÓfÔrÕlֆÓ0×؄٠ÏBËÇHÔf٠ϼÒ?ÓfÔrÕlֆÓ0×؄٠ÏBËÇHÔfÙ ÔÛÚÑfÏÜÐWÑ0ÝW޲؄٠ԅËÇ ß]Ã0ÊÊàÆCá+ϱ٠ÏEÚÑuÔEÐWÑ0Ýnâ]ÓlÕlÓ0ãAÖ]×؄ÙäÏBËÇ áÆ å Å?Æ3á–Ã"ÇHÔfÙ ÔÛÚÑfÏÜÐWÑ0ÝWælÑ"ç]ØÙ ÔèËÇPß]Ã0ÊàÊÆ3áéÏuÙ ¢¬À˦Ì^±t°ý(:öÐÂz±¯L˦¸°?©|ª(¶¦ÂÅÀ¬r©ËÇÌ^¬r­tª·¬À½"¹^°´­·½²±¬¿®gª·¬¿¶Ç© ®S¯Oª·ªt°²±©g­áµ°E®^±·¶X¹gÌ^½²°´¹2Ó ¹gÌ^½²ª¸¶¦±t®S«^¶¦ÅÀ¶LËǬÀ½²¯¦ÅE¯¦©g¯LſҐ­·¬À­¶¦© È{°²ÉÛ®S¯O˦°´­²³ ÉD°´½´¯¦Ì^­t°¶LÂ<½¼¶Ç¸®gÌ^ª·¯Lª·¬À¶¦©g¯LÅ ½¼¶Ç­tª²Ó𨠩^­tª·°´¯¦¹Z³µá° S±·­tª ­t°²ËǸ°´©|ªéª·°¼»ª·Ìg¯LÅG½²¶¦©Xªt°?©|ª·­;¬À©0ÈV°´ÉÊ®S¯O˦°´­ ¬À©Xªt¶ ­t°´©|ª·°´©g½¼°?­²³Ég¯¦­t°´¹{¶Ç©Vª·«^°ê¦¯O®S¯L©^°?­t°&®SÌg©^½¼Ä ª·Ìg¯Lª·¬¿¶Ç©ï­tÒX­tª·°´¸Ô³h¯¦©^¹ïÌg­t°ð¯è­·Ì±·Âó¯L½²°ð®g¯Lªtª·°²±© ¸ ¯Oª½«g¬À©ËÉg¯¦­t°´¹+¶Ç©h±·°´Ë¦ÌgÅÀ¯L±á°²»X®g±·°´­·­·¬À¶¦©g­²Ó Õ"¶oµG°²àL°²±´³á¬r©ð¸¶¦­tªˆ½´¯L­t°?­&ª·°²±¸÷¹^°´­·½²±¬¿®^ª¬¿¶Ç©^­ ½¼¶Ç©g­·¬À­tªˆ¶¦ÂE¸¶¦±·°vª·«g¯¦©Ï¶¦©^°v­t°´©ªt°?©^½²°LÓ¾ÖØ«g¬À­h¬À­ °´­·®D°´½²¬r¯LÅrÅ¿Ò,­·¯¦ÅÀ¬¿°?©|ªG¬r©,ª·«^°½²¯¦­t°µ «^°²±·°¯L©g¯L®g«^¶L±¬À½ °¼»®g±t°?­·­·¬¿¶Ç©^­3¯L©g¹1¬¿ªt°?¸ ¬¿Í?¯Oª¬¿¶¦©.¯L±·° Ìg­t°?¹ZÓÖØ«Ìg­²³c¬Àª ¬À­*¹°?­·¬¿±¯OÉSÅ¿°1ª·¶,°¼»ªt±¯L½²ª¯,ÅÀ¯L±t˦°²±*Âz±¯OËǸ°´©Xª"½²¶¦©Ä ª·¯¦¬À©g¬À©^Ë<­t°?©Xªt°?©g½¼°?­3ª«^¯Lª(¸ 
¯Lª·½«;µØ¬Àª·«;¹^°?­·½¼±¬¿®gª·¬¿¶Ç© ®S¯Oª·ªt°²±©g­²Ó ¨ ©ÏàX¬À°²µº¶LÂ.ª«^¬r­,®g±·¶LÉSÅ¿°´¸Ô³ µG°)S±­tªÔÌg­t°vÅÀ¬À©Ä ËÇÌ^¬À­tª¬À½"¹^°´­·½²±·¬À®^ª¬¿¶Ç©,®g¯Lªtª·°²±©g­Hªt¶;É^±¬¿°²çgÒ ¬r¹°?©Xª¬¿ÂzÒ ¯ˆÍ´¶¦©^°L³¯L©g¹v­t°?ê|Ì^°´©ª·¬À¯¦ÅÀÅÀÒ+­t°´¯L±·½²« ª·«^°;Âz¶ÇÅÀÅÀ¶oµ ¬À©Ë Âz±¯L˦¸°?©|ª·­ ±t°?ſҐ¬À©^ËÏ®g¯L±tª¬À¯¦ÅÀÅ¿ÒÛ¶Ç©èÕ*ÖØ×ÚÙíª¯OËÇ­²³ Ìg©|ª·¬rÅZ¯ ½²°²±·ª·¯¦¬À©ÔÂz±¯L˦¸°´©ªá¬À­á°²»ªt±¯L½²ªt°?¹-(ë ñ ùoõ1®S¯L±·¯LËL±¯L®g«Ïª¯O˦ËL°?¹Ïµ ¬¿ª«Eì„íaî ž|ž|ž ì ï„íaî ñ ¶L± ìíaî ž|žXž ì„í<î!¬À© ª·«^°Û½´¯L­t°Ûµ «°´±t°ðì ï„í<î>¬À­ ¸ ¬r­·­·¬À©^ËXõ ³ ñ ü|õé¬Àªt°?¸ ¬¿Í´¯Lª·¬À¶¦©+ª·¯LËL˦°´¹Ôµ ¬¿ª«ñìlòDóaî žXž|ž ì ïòóa ñ ý|õ+ôÞ­t°?©Xª·°´©g½¼°?­H¬À¹^°´©Xª·¬S°´¹ µ ¬¿ª«&ª·«^°A¦¯O®S¯L©^°?­t° ®SÌg©^½²ª·Ìg¯Oª¬¿¶Ç©6­tҐ­tªt°?¸Ô³µ «^°²±·°ª«°&­t°´©ªt°?©^½²° ª«^¯Lª"¸ ¯Oª·½¼«°?¹Ôµ ¬Àª·«¯&¹°?­·½¼±¬¿®gª·¬¿¶Ç©+®g¯Lªtª·°²±© ¬r­1®D¶Ç­·¬¿ª¬¿¶¦©^°?¹0¯¦­é©^°´¯L±;½²°´©Xª·°²±;¯L­é®~¶¦­·­·¬ÀÉgÅÀ°L³ µ «^°²±·°<µG°<°?¸®g¬À±·¬r½²¯¦ÅÀÅ¿Òh­t°²ª±ôöõÛýXÓ TU;÷ øñù+qrYP[]\_^a`0b2cdfehg"il^aj?glknm½oqrb glsmc ÖØ«±·¶Ç̐ËÇ«;¯®^±·°´År¬À¸ ¬À©g¯O±·Ò1­tªÌ^¹^Ò鶦©;°¼»^¬À­tª·¬r©Ë<Èΰ²É ®S¯O˦°´­²³ µG° ¬À¹^°´©Xª·¬S°´¹ ª µ ¶>ª ÒX®g¬r½²¯¦ÅÔÌg­·¯O˦°´­Ã¶¦Â ú†Œސ6‘l’C“”C‘ ° ¢û“¡¢fü_ýþÁÿ&¥3”C¡F6’´Ÿ¹l¢†¤06Ÿ£t¥3¬l¬"®6’ ¬"®6Ÿà¥]6¢û6¢0³£®¥C”C¶F¢†¤6¡†¯ ° ¢±µ†¥CŽŽp6‘l¢u¶w¢©6‘l’0¹t¹"¢†¡6µ©®6Ÿ·¢¹ Ÿ¤t‘Ÿ¡w¡6¢†µ©6Ÿ’C¤_ÿ  ²·„¥3¡¢¹t¶w¢©6‘l’0¹¯-Ÿ¤¿¥´µ†’C¶w¬¥]®6Ÿ¡’C¤ ° Ÿ6‘J6‘l¢±¶w¢©6‘l’"¹–Ÿ¤ 0¢†µ©6Ÿ’,¤ "º A6‘¥]³®6¢†ŽŸ¢†¡F¡6’CŽ¢†Ž–’C¤ ü_ýþAÿ´¥3”C¡†º Õ*Öá×ÚÙϪ·¯L˦­1¯L­·­t¶½´¬À¯Lªt°?¹6µ ¬¿ª«{¹°?­·½¼±¬¿ÉS¬À©^Ë ªt°?½«XÄ ©^¬r½²¯¦ÅZªt°´±¸ ­²Ó ¨C©<ª·«^°Pg±­tªÌg­·¯LËL°¦³O¯áª·°²±¸Û¬À©.êX̐°?­tª·¬¿¶Ç©.¬À­«g¬¿ËÇ«XÄ ÅÀ¬¿ËÇ«|ª·°´¹{¯L­é¯Ú«^°´¯¦¹^¬r©Ë ÉXÒ6µá¯?Ò6¶LÂfì<î ž|ž|ž ì2ï<î^³ ìaî ž|ž|ž ì2ïaî,¶¦±'ì <î&ª¯OËg³2¯¦©^¹ ÂÆ¶¦ÅrÅ¿¶oµ°?¹+ɐÒ+¬¿ª­ ¹°?­·½¼±¬¿®gª·¬À¶¦©0¬À©0¯Ú­·«^¶¦±tª;ÂÆ±¯OËǸ°´©|ª´Óv¨C©0ª·«^°Ô­t°´½¼Ä ¶¦©g¹ÏÌg­·¯O˦°L³ ªt°´±·¸ ­ˆª·«g¯Oªh¯L±·° ®~¶Lª·°´©|ª¬À¯¦ÅÀÅ¿ÒÃÌg©^ÂÆ¯oÄ ¸ ¬ÀÅÀ¬r¯O±<ªt¶+±t°?¯L¹^°²±­<¯L±t° ª·¯LËL˦°´¹vµ ¬¿ª·«6ª·«^°¯¦©^½«^¶¦± ìD¯OËg³2®g±·¶OàX¬r¹^¬r©Ëh«XÒ|®D°´±·År¬À©^ìX­ªt¶+¶¦ª·«^°²±<®g¯LËL°?­ ñ 
¶¦±¯1¹^¬¿ß~°´±·°´©XªH®~¶Ç­·¬¿ª¬¿¶¦©&µ ¬Àª·«g¬À©&ª·«^°*­·¯L¸°*®S¯LËL°Oõ µ «°´±t°Eª·«^°²Òh¯L±t°1¹^°´­·½²±·¬ÀÉD°´¹2Ó ÖØ«°Á½²±·Ìg½´¬À¯LÅ Âó¯L½²ªt¶¦±Ð«^°²±·°¾¬À­ðªt¶¹^°²ª·°²±¸ ¬À©^° µ «^¬À½¼«ÏÂz±¯OËǸ°´©ªh¬À©ðª«°V®S¯O˦°6¬À­h°²»Xª·±·¯¦½¼ª·°´¹Ñ¯L­ ¯6¹^°´­·½²±¬¿®gª·¬¿¶Ç©ZÓèD¶L±ª·«g¬À­®S̐±·®D¶¦­·°L³µ°+Ì^­t°+ª·«^° ­·¯L¸°Ø±·ÌgÅ¿°?­H¹^°´­·½²±·¬ÀÉ~°´¹ ¬À© þ °´½²ª·¬¿¶Ç©,ýXÓlùOÓ:Õ"¶oµG°²àL°´±²³ Ì^©gÅÀ¬À즰᪫° ò Ùô:ÄTÉg¯¦­t°´¹ ¸°²ª«¶¹2³X¬À©ª·«^° Õ*ÖØ×ÚÙ2Ä Ég¯¦­t°´¹b¸°²ª«¶¹0µG°h°¼»ªt±¯L½²ªª«°hÂÆ±¯OËǸ°´©Xªéª«^¯Lª  ‚«‚ | ª«°Ú«^°´¯¦¹g¬À©Ëb¯L©g¹$ª·«^° ®~¶¦­·¬Àª·¬¿¶Ç©ÏÅÀ¬r©ì¦°?¹ Âz±·¶¦¸ ª·«^°.¯¦©^½²«¶¦±´Ó Õ"¶Oµ°²à¦°´±²³S¬À©+ª·«^°é½²¯¦­t°.µØ«^°²±·° ¯Vªt°²±¸ ¬r©ÃêXÌ^°´­tª¬¿¶¦©$¬À­&ª·¯LËL˦°´¹Ãµ ¬¿ª«ì aáµ° °¼»ªt±¯¦½¼ª ª·«^°ÊÂz¶ÇÅÀÅ¿¶OµØ¬r©ËÏÂÆ±·¯L˦¸°?©|ª ª·¯LËL˦°´¹ µ ¬¿ª« ì½îÓ ò ¶Lª·°Úª·«g¯Oª)ì aîʯL©g¹Eì<îʯL±t°v¬r©^«^°²±tÄ °´©|ªÅ¿Ò,®^±·¶Oà¬À¹^°´¹hªt¶&¹°?­·½¼±¬¿É~°Eªt°´±¸­²Ó ÖØ«°{Õ*Öá×ÚÙÄÉS¯L­t°?¹Ñ¸°´ª·«^¶X¹Ñ¬r­+°¼»®D°´½²ªt°?¹Ðª·¶ °¼»ªt±¯¦½¼ªª·°²±¸Þ¹°?­·½¼±¬¿®gª·¬À¶¦©g­<ª·«g¯OªE½²¯¦©^©^¶Lª<É~°;°²»|Ä ªt±¯L½²ªt°?¹VÉXÒvª·«^° ò Ùô(ÄÉS¯L­t°?¹Î¸°²ª·«^¶X¹2³¢¯L©g¹6à¬À½²° র²±­·¯^Ó¨ ©EÂÆ¯¦½¼ª²³¦¬À©´¢¬¿ËÇ̐±·°Güت·«^°Hª·«g¬¿±¹1¯L©g¹.Âz¶Ç̐±·ª·« ¹°?­·½¼±¬¿®gª·¬À¶¦©g­ µG°²±·°1°¼»ªt±¯¦½¼ªt°?¹µ ¬¿ª·«Úª·«^°éÕ"ÖØ×ÚÙ2Ä Ég¯¦­t°´¹V¸°²ª«¶X¹2³µØ«g¬ÀÅÀ°ª·«^° ±t°?­tª<µá°²±·°;°²»Xª·±·¯¦½²ªt°´¹ µ ¬¿ª·«Ôª·«^° ò Ùô(ÄÉS¯L­t°?¹+¸°´ª·«^¶X¹ZÓ   `ãPO3M(`hO]  ä:åH]"!TPã€O$#tä¥ÚQEP%!óJL]g¥ÇPã€O & ¬Àর´©Ô¯­t°´ªG¶¦Â3È{°²ÉÔ®g¯LËL°EÂz±¯OËǸ°´©|ª­G°¼»ªt±¯L½²ªt°?¹ ÉXÒЪ·«^° ò ÙôF oÕ*ÖØ×ÚÙ2ÄTÉg¯¦­t°´¹ ¸°²ª«^¶¹^­²³.µ°Î­t°¼Ä Å¿°?½¼ª,Âz±¯OËǸ°´©Xª·­&ª«g¯Oªˆ¯O±·°ÚÅÀ¬r©ËÇÌ^¬r­tª·¬À½´¯LÅrÅ¿ÒÃÌg©^¹^°²±tÄ ­tª·¯¦©^¹g¯LÉgÅ¿°¦³(¯¦©^¹b¬À©g¹°²»Îª«°?¸5¬r©|ªt¶Úª«°Ô¹°?­·½¼±¬¿®^Ä ª·¬À¶¦©v¹g¯Lª·¯LÉg¯¦­t°LӖD¶¦±*ª·«g¬À­<®SÌ^±t®D¶Ç­t°L³µ°®D°²±·Âz¶¦±·¸ ¯0ÅÀ¯¦©^˦Ìg¯O˦°6¸¶¹^°´År¬À©Ëg³*­t¶b¯¦­Ôªt¶bêXÌg¯L©|ª¬¿ÂzÒ$ª·«^° °¼»ªt°?©|ªª·¶hµ «^¬r½«v¯ˆËǬ¿à|°´© ª·°¼»ªÂz±¯L˦¸°?©Xª¬À­EÅr¬À©Ä ˦Ìg¬À­tª¬À½²¯¦ÅÀÅÀ҈¯¦½²½²°²®gª·¯LÉgÅÀ°LÓ ÖØ«°´±·°¯L±·° ­t°´à¦°´±·¯¦Åد¦Å¿ª·°²±©^¯Lª·¬Àর ¸°²ª·«^¶X¹g­&Âz¶¦± ÅÀ¯¦©ËÇÌ^¯LËL°ˆ¸¶¹^°?ÅÀ¬À©^Ë^Ó 
D¶¦±1°¼»^¯L¸®SÅ¿°¦³¢ËL±¯L¸ ¸ ¯L±­ ¯O±·°E±t°?ÅÀ¯Lª·¬¿à|°´Å¿Òh­tª·±·¬r½¼ªØÅr¯L©^˦Ìg¯LËL°1¸¶X¹°?ÅÀ¬À©^Ë,¸°²ª«XÄ ¶X¹^­²Ó Õ"¶Oµ °´àL°²±´³µG°,Ìg­t°ˆ¯ ¸;¶^¹°?Å(ÉS¯¦­t°´¹{¶Ç© ô+Ä ËL±¯¦¸Ô³|µ «g¬À½«,¬r­ Ìg­·Ì^¯¦ÅÀÅ¿Ò&¸¶¦±·°"±t¶¦ÉgÌg­tªª«^¯¦©&ª·«g¯Oª Ég¯¦­t°´¹<¶Ç©<˦±¯L¸ ¸ ¯L±·­²Ó3¨C©<¶¦ª·«^°²±µ¶¦±¹^­²³cªt°²»ª2Âz±¯LËOÄ ¸°´©|ª­<µ ¬¿ª·«VÅ¿¶Lµ °²±E®D°²±·®SÅ¿°²»¬¿ªtҁàL¯LÅr̐°?­E¯O±·°&¸;¶¦±·° ÅÀ¬À©^ËÇÌ^¬À­tª¬À½´¯LÅrſ҈¯¦½²½²°²®gª·¯LÉSÅ¿°LÓ ¨C©{®g±¯L½²ª·¬À½²°L³(µG°ˆÌg­t°´¹Êª·«^°»!× * Ć!¯¦¸1Ég±¬À¹^ËL° ªt¶|¶Çſ쐬¿ª ñ !GÅÀ¯O±·ì­t¶¦©b¯L©g¹ÊæØ¶Ç­t°´©^Âz°?ÅÀ¹2³ ù?ûLû Lõ ³H¯¦©^¹ ®g±t¶X¹gÌ^½²°´¹\¯ ªt±¬¿Ë¦±·¯¦¸;ÄÉS¯L­t°?¹ÞÅr¯L©^˦Ìg¯LËL° ¸;¶X¹^°?Å Âz±·¶Ç¸ ªCµ ¶ Òǰ?¯O±­¶¦Â×Ú¯¦¬À©g¬À½¼«g¬ þ «^¬r¸éÉSÌ^©´¦¯L®g¯¦©°?­t° ©^°²µ ­t®S¯O®D°´±;¯O±·ª¬À½²ÅÀ°´­ ñ ×Ú¯¦¬À©g¬À½«g¬ þ «g¬À¸;ÉgÌg©Z³Gù?ûLû„G ù?ûLû?IÇõ ³µØ«g¬À½«ÐµG°´±·°6¯¦Ìª·¶¦¸ ¯Lª·¬À½´¯LÅrÅ¿Òð­t°´ËǸ;°?©|ª·°´¹ ¬À©Xªt¶,µG¶¦±¹^­"ɐÒÔª«°t!G«g¯ þ °´©Ú¸¶¦±·®g«^¶¦ÅÀ¶LËǬÀ½´¯Lů¦©XÄ ¯¦Å¿Ò|Í´°²± ñ ×Ú¯Oª­·Ì^¸¶¦ªt¶ °²ª"¯LÅeÓÀ³ù?ûLû Lõ Ó ¨ ©Úª·«^°&½²Ì^±t±·°?©|ª<¬À¸®SÅ¿°?¸;°?©|ª¯Oª¬¿¶¦©2³2µá°é°´¸®S¬¿±¬ Ä ½²¯¦ÅÀÅÀÒ ­t°´Å¿°?½¼ª¯¦­Hª«°±D©^¯¦ÅD°¼»ªt±¯¦½¼ª·¬À¶¦©&±·°´­Ì^Å¿ª­Hª·°¼»ª Âz±¯L˦¸°?©|ª·­éµ «¶Ç­t°h®~°´±t®SÅ¿°²»¬ÀªCÒVàL¯LÅr̐°?­é¯L±·°ÔÅ¿¶Oµ°²± ª·«g¯¦©6ùL³z?zz^Ó ' ( !M KcJO]S¥ÇPã€Oð[]S¥?Q R6]SKO§¥ÇPSHJLPäã K S¶L±Ôª·«^°v®S̐±·®~¶Ç­t°v¶¦Âé½´ÅÀÌ^­tª·°²±¬À©^Ë0ª·°²±¸ë¹^°´­·½²±·¬À®Ä ª·¬À¶¦©g­;°¼»ªt±¯L½²ªt°?¹bÌg­·¬À©^Ë{¸;°´ª·«^¶X¹g­ ¬À© þ °?½¼ª·¬À¶¦©g­&ý ¯¦©^¹tG^³|µG° Ìg­t° ª·«^° Õ*¬¿°´±¯O±½¼«g¬À½´¯LÅDá¯cÒ¦°´­¬À¯L©¿!GÅÀÌg­ Ä ªt°´±¬À©^Ë ñ Õu±!Øõ,¸°²ª·«^¶¹ ñ ¨CµG¯cÒ¦¯¦¸ ¯0¯L©g¹ÛÖ3¶¦ìXÌÄ ©g¯OËǯ³ù´û¦ûI|õ³^µ «g¬À½«+«^¯¦­áÉD°²°?©+Ì^­t°?¹hÂÆ¶L±"½´ÅÀÌ^­·ªt°²±tÄ ¬À©^Ë ©^°´µØ­Ø¯L±·ª·¬À½´Å¿°?­ ¯L©g¹+½¼¶Ç©^­tª·±·Ìg½²ª·¬À©^˪«°?­·¯LÌ^±¬óÓ ö*­ µ ¬¿ª«Ï¯{©XÌ^¸éÉS°²±,¶¦Â«g¬¿°´±·¯L±½¼«g¬À½²¯¦Åá½´ÅÀÌg­tªt°´± Ä ¬À©^˸°²ª«¶^¹^­²³Oª«°ØÕu±!{¸°´ª·«^¶|¹¸°²±·Ë¦°´­¢­·¬À¸ ¬ÀÅr¯O± ¬¿ª·°´¸ ­ ñ ¬óÓ °LÓÀ³¢ª·°²±¸¹^°´­½¼±¬¿®gª·¬¿¶Ç©g­é¬À©{¶ÇÌ^±é½²¯¦­t°oõE¬r© ¯ÉD¶Lª·ªt¶Ç¸;ÄTÌ^®Ê¸ ¯L©g©°´±´³:Ì^©Xª·¬ÀÅH¯LÅrÅ(ª«°ˆ¬Àªt°´¸ ­é¯O±·° ¸°²±·Ë¦°´¹{¬À©ªt¶¯ 
­·¬À©^˦ſ°,½´ÅÀÌ^­tª·°²±´Ó+ÖØ«g¯Oª1¬À­²³¢¯ ½²°²±tÄ ª·¯¦¬À©$©Ìg¸.ÉD°´±,¶¦Â½´ÅÀÌ^­·ªt°²±­ˆ½²¯¦©$ÉD°¶¦Égª·¯¦¬À©^°´¹$ÉXÒ ­t®SÅÀ¬¿ª·ª·¬À©^ˁª·«^°,±t°?­·Ì^ÅÀª·¯¦©|ªE«g¬¿°´±¯O±½¼«ÇÒ6¯Oª1¯ ½¼°´±·ª·¯¦¬À© Å¿°´à¦°´ÅeÓ öت ª«°Ú­·¯¦¸° ª·¬À¸°¦³áª·«^°ÚÕfû!9¸°´ª·«^¶|¹Ï¯¦ÅÀ­t¶ ¹^°²ª·°²±¸ ¬À©^°´­ª·«^°G¸¶¦­tª3±·°²®g±·°´­t°?©Xª·¯Lª·¬ÀàL°¬¿ªt°?¸ ñ ½¼°?©XÄ ªt±·¶Ç¬À¹~õÂÆ¶L±H°?¯L½²« ½´ÅÀÌg­tªt°´±²Ó:ÖØ«^°´©2³|µG° ®g±t°?­t°´©|ªH¶Ç©^ÅÀÒ ª·«^¶Ç­t°E½²°´©|ª·±t¶Ç¬À¹g­áªt¶&Ì^­t°´±­²Ó ÖØ«^°,­¬À¸ ¬ÀÅÀ¯L±¬¿ªCÒvÉ~°²ªCµG°²°?©{¬¿ªt°?¸ ­1¬À­é½¼¶Ç¸®gÌ^ªt°?¹ ÉS¯L­t°?¹ ¶Ç© Âz°´¯Lª·Ì^±·°Ûà|°´½²ªt¶¦±·­0ª«^¯Lª$½²«^¯L±¯L½²ªt°´±·¬ÀͲ° °´¯¦½« ¬¿ªt°?¸ÔÓ ¨C© ¶¦Ì^±$½²¯¦­t°L³àǰ?½¼ª·¶L±­0Âz¶¦±b°?¯L½« ªt°´±¸>¹°?­·½¼±¬¿®gª·¬¿¶Ç© ½¼¶Ç©^­·¬r­tª(¶¦ÂDÂz±·°?ê|Ì^°´©g½²¬À°´­:¶¦ÂZ½²¶¦©Ä ªt°?©|ª&µ¶¦±·¹g­ ñ °¦Ó ËgÓÀ³Ø©^¶ÇÌ^©g­&¯¦©^¹Ãàǰ´±·Ég­ ¬À¹^°?©Xª·¬g°?¹ ª·«^±·¶¦Ì^˦«6¯Ô¸¶¦±·®g«^¶¦ÅÀ¶LËǬÀ½´¯LÅ¢¯¦©^¯¦Å¿Ò­·¬À­õ*¯O®g®D°´¯L±·¬r©Ë ¬À©Ôª·«^°E¹^°?­·½¼±¬¿®gª·¬¿¶Ç©2Ó ) MéN Sá]S¥ÇP6QÏ]Sã¢JO`~J¦Peäã *UV qrb g"s mhc_m,+nm"-". È{°Ú¬À©XàL°?­tª·¬À˦¯Lªt°?¹$ª·«^°v°¼ßZ°´½²ª·¬Àর´©^°´­·­,¶LÂE¶Ç̐±Ô°¼»XÄ ªt±¯¦½¼ª¬¿¶¦© ¸°²ª«¶X¹ Âz±·¶¦¸ ¯¾­½²¬¿°?©Xª¬D½Ï®D¶¦¬À©ªÊ¶LÂ à¬¿°²µEÓÊÕ"¶oµ°´àL°²±´³HÌg©gÅÀ¬¿ì¦°ˆ¶¦ª·«^°²±±·°´­t°?¯O±½¼«Îª·¶L®S¬À½´­ µ «°´±·°VÉ~°?©g½«^¸ ¯L±tìðªt°?­tª ½¼¶ÇÅÀÅ¿°?½¼ª¬¿¶¦©g­ ¯L±·°{¯oào¯¦¬ÀÅ Ä ¯LÉgÅ¿° ª·¶"ª·«^° ®gÌ^ÉgÅr¬À½ ñ °¦Ó ËgÓÀ³L¬À©^Âz¶¦±¸¯Lª·¬À¶¦©.±·°´ªt±¬¿°²àO¯¦Åzõ³ ª·«^°´±t°"¯O±·°"ªCµ¶.¸ ¯cît¶L±H®g±·¶LÉSÅ¿°?¸­ Âz¶¦±Hª·«^°"®gÌ^±·®D¶¦­t° ¶¦Â3¶¦Ì^±Ø°¼»®~°²±¬r¸;°?©|ª¯Oª¬¿¶¦©2³g¯¦­áÂz¶ÇÅÀÅ¿¶oµ ­( / ®^±·¶¹gÌ^½²ª·¬¿¶Ç©Ñ¶LÂ;ªt°?­tª+ª·°²±¸ ­ÔÂz¶¦±µØ«g¬À½«Û¹^°¼Ä ­·½²±·¬À®^ª¬¿¶¦©g­ ¯L±·°<°¼»ªt±¯¦½¼ª·°´¹2³ / îtÌ^¹^˦°´¸°?©|ª*Âz¶¦±<¹^°´­·½²±·¬À®^ª¬¿¶Ç©^­°²»Xª·±·¯¦½¼ª·°´¹vÂz¶L± ª·«^¶Ç­t°Eª·°´­tªØªt°´±·¸ ­²Ó S¶L±bªt°?­tªÃª·°²±¸ ­²³+®Z¶¦­·­·¬¿ÉSÅ¿° ­t¶¦Ì^±½¼°?­$¯O±·°Ûª·«^¶¦­t° ÅÀ¬À­tª·°´¹ ¬À©Þ°²»¬À­tª¬À©^Ë ªt°²±¸ ¬À©^¶ÇÅ¿¶L˦ÒÞ¹^¬r½¼ª·¬À¶¦©g¯O±¬À°´­²Ó Õ"¶oµ°´àL°²±´³S­·¬r©^½²°Eª«°Øî·Ì^¹^ËL°?¸°´©|ª"½´¯L©+ÉD°é½¼¶Ç©^­¬À¹XÄ °²±¯LÉgÅ¿Òb°²»X®Z°´©g­·¬¿à¦° ÂÆ¶L±h¯0ÅÀ¯O±·Ë¦°Ú©Ìg¸.ÉD°²±h¶LÂ<ªt°?­tª ªt°´±·¸ 
­²³Z¬Àª"¬r­"®g±·°²Âz°´±¯OÉSÅ¿°1ªt¶ˆ­t°?Å¿°?½¼ª¬¿àǰ?Å¿ÒÔ­·¯L¸®SÅ¿°;¯ ­·¸ ¯LÅrÅ^©XÌg¸1ÉS°²±(¶¦ÂDª·°²±¸ ­:ª·«g¯Oª(®~¶Lª·°´©Xª·¬À¯¦ÅÀÅ¿Òé±·°¼çS°´½²ª ª·«^°E¬r©|ªt°´±t°?­tªØ¬r©hª«°E±·°´¯¦Å2µ¶¦±ÅÀ¹2Ó ¨C©ÐàX¬À°²µ\¶LÂ1ª·«g¬À­Ô®g±t¶¦ÉgÅÀ°´¸Ô³*µ°VÌ^­t°?¹Ð¯¦­Ôªt°?­tª ªt°´±·¸ ­ª·«^¶Ç­t°½¼¶Ç©|ª·¯¦¬À©^°´¹,¬r©ˆê|Ì^°²±¬¿°?­G¬À©,ª·«^° ò öu! Ä þ ¨ þ ªt°?­tª:½²¶¦ÅÀÅÀ°´½²ª·¬¿¶Ç© ñeø ¯L©g¹¶<°²ª(¯LÅóÓr³gù?ûLû¦ûÇõ ³Lµ «^¬r½¼« ½¼¶Ç©^­·¬r­tª·­¶¦ÂAz¾¦¯L®g¯¦©^°´­t° ê|Ì^°²±¬¿°?­ ¯¦©^¹Ã¯O®g®^±·¶c»XÄ ¬À¸ ¯Lªt°´ÅÀÒÏýLýz^³zz?z0¯LÉg­tª·±¯L½²ª·­ ñ ¬r©Ï°´¬Àª·«^°²±+¯Ê½¼¶Ç¸;Ä Ég¬r©^¯Lª·¬À¶¦© ¶¦Â  ©ËÇÅÀ¬À­·«¾¯L©g¹r¦¯O®S¯¦©°?­t°Ê¶L± °?¬¿ª«°´± ¶Lª«^°EÅÀ¯¦©ËÇÌg¯O˦°´­Ø¬À©g¹g¬¿à¬À¹^Ìg¯¦ÅÀÅ¿ÒSõ³S½¼¶ÇÅÀÅ¿°?½¼ª·°´¹ÔÂz±·¶¦¸ ªt°?½«^©g¬À½´¯LÅ|®S¯O®Z°²±­¢®gÌ^ÉgÅr¬À­·«^°´¹ÉXÒ+?IA¦¯L®g¯¦©^°´­t°á¯¦­ Ä ­t¶X½²¬r¯Oª¬¿¶¦©g­áÂz¶¦± àL¯O±¬¿¶ÇÌ^­FS°´År¹^­²Ó10 ÖØ«^¬r­ ½¼¶ÇÅÀÅÀ°´½²ª·¬¿¶Ç©µG¯¦­Ø¶¦±·¬À˦¬À©g¯¦ÅÀÅ¿Òh®g±t¶¹gÌg½¼°?¹Âz¶L± ª·«^°0°´àO¯¦ÅÀÌg¯Oª¬¿¶Ç©è¶Lˆ¬À©ÂƶL±¸ ¯Oª¬¿¶Ç©¾±·°´ªt±¬¿°²àL¯¦Åé­tÒX­ Ä ªt°?¸ ­²³Eµ «^°²±·°V°?¯L½²« ê|Ì^°²±·ÒѬÀ­ Ìg­t°?¹ ªt¶Ï±·°²ª·±·¬À°²à¦° ªt°?½«^©g¬À½´¯LÅ*¯LÉg­tª·±¯L½²ª·­²Ó>ÖØ«^Ì^­²³ ª·«^°Úª·¬Àª·Å¿° g°?ÅÀ¹ð¶¦Â °´¯¦½« ê|Ì^°²±·ÒéÌg­·Ìg¯LÅrÅ¿Òé½²¶¦©ª·¯¦¬À©g­¢¶¦©^°Ø¶¦±H¸¶L±·° ªt°´½¼«XÄ ©^¬r½²¯¦Å"ªt°´±·¸ ­²Ó °´­·¬r¹°?­ˆª·«g¬À­²³*­·¬À©g½¼°v°?¯L½«Ðê|Ì^°²±·Ò µG¯¦­2®^±·¶X¹gÌ^½²°´¹EÉS¯L­t°?¹<®S¯L±tª¬À¯¦ÅÀſҶǩ<°²»¬r­tª·¬À©^Ë ªt°?½«XÄ ©^¬r½²¯¦ÅZ¯OÉS­tªt±¯¦½¼ª­²³ª·«^°²Ò,±t°²çg°?½¼ªáª·«^°±·°´¯¦Å~µ¶¦±·År¹ˆ¬À©Ä ªt°´±t°?­tª²³Gª·¶Ê­t¶¦¸°°¼»ªt°?©Xª´ÓÐö*­,¯{±·°´­·ÌgÅ¿ª²³áµG°°¼»XÄ ªt±¯L½²ªt°?¹tI¦ý<ªt°?­tª(ª·°²±¸ ­²³Ç¯¦­H­·«^¶Oµ ©;¬r© Ö:¯LÉgÅ¿°EùOÓ¢¨ © ª·«g¬À­(ª¯OÉSÅ¿°¦³ÇµG°Ø±·¶Ç¸ ¯L©g¬¿Í´°´¹¿¦¯O®S¯¦©°?­t°Øª·°²±¸ ­²³X¯L©g¹ ¬À©g­t°²±·ªt°?¹,«|ÒX®g«^°´©g­(É~°´ªCµ°´°´© °?¯L½«&¸¶L±·®S«°?¸°"Âz¶L± °´©g«^¯¦©g½¼°?¹h±·°´¯¦¹g¯OÉS¬ÀÅÀ¬ÀªCÒ¦Ó ò ¶Lª·°áª«^¯Lª(Ìg©^År¬¿ì¦°Øª·«^°Ø½´¯L­t°Ø¶LÂZ¬À©^Âz¶L±¸ ¯Lª·¬¿¶Ç©±t°²Ä ªt±¬¿°´àL¯LÅ ñ °LÓ ËgÓÀ³3¯Ô®S¯Lªt°?©|ª±·°²ª·±·¬À°²àL¯LÅÆõ³µØ«^°´±t° °²à|°²±·Ò ±·°´Å¿°´àL¯L©Çª.¹¶½´Ì^¸°?©|ª1¸éÌg­tª<É~°&±·°²ª·±·¬À°²à¦°?¹Z³¬À©V¶Ç̐± 
½²¯¦­t°,°²à¦°?©{¶¦©^°ˆ¹^°´­·½²±·¬À®^ª¬¿¶¦©0½²¯¦©{®D¶Lª·°´©ª·¬À¯¦ÅÀÅ¿ÒvÉ~° ­·Ì2,½´¬¿°?©|ª²Ó3¨C©;¶Lª«°´±µá¶L±¹g­²³L¬À©1¶Ç̐±:°¼»®D°²±¬À¸°?©Xª­²³ ¸¶L±·°µG°?¬¿ËǫǪ&¬r­&¯Oª·ª·¯¦½¼«^°´¹bªt¶Ê¯¦½²½´Ì±¯L½²Ò ñ ®g±·°´½´¬ Ä ­·¬¿¶Ç©Dõ ª·«g¯L©Ô±·°´½´¯LÅÀÅeÓ S¶L± ª«°;­t°?¯O±½«+°?©ËǬÀ©^°;¬À©ñ¢¬À˦Ì^±t°ˆùL³~µG°1Ìg­t°´¹  Ë¦¶|¶^³ 2+µ «^¬r½«0¬À­ ¶¦©^°h¶LÂ*ª·«^°Ô¸ ¯Oî ¶¦±t¦¯L®g¯¦©^°´­t° È{°²É&­t°´¯L±·½«°?©ËǬÀ©^°´­²Ó:ÖØ«^°´©2³¦ÂƶL±H°´¯¦½«°¼»ªt±¯L½²ªt°?¹ ¹°?­·½¼±¬¿®gª·¬À¶¦©2³g¶Ç©^°E¶¦Âª«°é¯¦Ìª«¶¦±­¢îtÌg¹Ë¦°´¹¬Àª ½²¶L±tÄ ±·°´½¼ªØ¶L± ¬À©g½²¶L±·±·°´½²ª²Ó 34656587"9;:<:>=<=<="?@<A ?CBEDEF<G<H<G?;DIF?;J>7E:EKLB65FDANM: HOB6A6P8QIR8PB"?C4I5NMS T 4656587"9;:<:>=<=<="?UIV<VW?CBEPX?;J>7E: *U;÷ Y b2`EZ[+;g0`  ̐ª.¶¦ÂGª«° I¦ý+ªt°´­tª.ª·°²±¸ ­.°²»Xª·±·¯¦½¼ª·°´¹VÂz±·¶Ç¸ ª·«^° ò öu! þ ¨ þ ½²¶¦ÅrÅ¿°?½¼ª·¬À¶¦©2³LÂÆ¶L±PGG<ªt°´±·¸ ­:˦¶|¶<±·°²ª·±¬¿°²àǰ´¹ ¶Ç©°Î¶¦±Ú¸¶L±·°ÎÈV°´É ®g¯LËL°?­²Ó ö*¸¶Ç©ËϪ«¶Ç­t° GG ªt°?­tª ªt°´±·¸ ­²³X¶Ç̐±á¸°²ª«¶X¹,°²»Xª·±·¯¦½²ªt°´¹,¯LªGÅ¿°?¯L­tª ¶Ç©° ªt°´±¸¹°?­·½¼±¬¿®gª·¬À¶¦©ÛÂz¶¦±Úü0ª·°²±¸ ­²³E¹g¬À­t±·°²ËǯO±¹g¬À©^Ë ª·«^°&îtÌg¹^ËL°?¸°´©|ª´ÓèÖØ«XÌg­²³áª«°v½²¶OàL°´±·¯L˦° ñ ¶L±Ô¯O®^Ä ®SÅÀ¬À½´¯OÉS¬ÀÅÀ¬ÀªCÒgõ¶¦Â¶Ç̐±"¸°´ª·«^¶¹ÔµØ¯¦­±ùLӐG \,Óá¨C©Ö¢¯OÄ ÉSÅ¿°vùL³¢ª·«^°ˆª·«g¬¿±¹Ê½¼¶ÇÅÀÌg¸ ©Ê¹^°´©^¶Lª·°´­éª·«^°h©Ìg¸.ÉD°´± ¶¦Â~È{°²É&®S¯O˦°´­H¬r¹°?©|ª¬g°?¹ɐÒ1˦¶X¶gÓ:Õ"¶oµ°´àL°²±´³¦Ë¦¶X¶ ±·°²ª·±·¬À°²à¦°?­½¼¶Ç©|ªt°?©Çª·­2ÂÆ¶L±¶Ç©^ſҪ«°(ªt¶¦® ùL³z?zzØ®S¯O˦°´­²Ó Ö¢¯OÉSÅ¿°.ù᯦ÅÀ­t¶1­·«¶LµØ­¢ª«°"©Ìg¸.ÉD°´±H¹°?­·½¼±¬¿®gª·¬¿¶Ç©g­ îtÌg¹^ËL°?¹!¯¦­{½¼¶¦±·±t°?½¼ª ñ ª«°Ï½¼¶ÇÅÀÌg¸ ©ö ]t!w |õ ³<ª·«^° ªt¶¦ª·¯¦Å<©XÌg¸éÉg°´±¶LÂ;¹°?­·½¼±¬¿®gª·¬À¶¦©g­Ô°¼»ªt±¯¦½¼ªt°?¹ ñ ª·«^° ½¼¶ÇÅÀÌg¸ © ]éÖû Xõ³¯¦©^¹Áª·«^°0¯L½´½²Ì^±·¯¦½¼Ò ñ ª«°0½²¶¦Å¿Ä Ìg¸ © ·öf |õ ³(Âz¶¦±&É~¶¦ª·«$½²¯¦­t°´­&µØ¬Àª·«½ cµ ¬¿ª«¶ÇÌ^ª ª«° ªt±¬¿Ë¦±¯L¸;ÄTÉg¯¦­t°´¹+ÅÀ¯¦©ËÇÌ^¯LËL°.¸¶¹^°´ÅóÓ Ö¢¯OÉSÅ¿°ù.­·«^¶oµ ­áª·«g¯OªØª·«^° ò Ùôw oÕ*ÖØ×ÚÙ2ÄÉS¯L­t°?¹ ¸°²ª«¶X¹g­ °²»Xª·±¯L½²ªt°?¹ ¯O®g®^±·¶¦®^±¬À¯Lªt°1ªt°´±¸ ¹°?­·½¼±¬¿®^Ä ª·¬À¶¦©g­µ ¬¿ª«è¯¼¦ýXÓI^\믦½²½´Ì±¯¦½¼Òdz.¯L©g¹èª«^¯Lª ª·«^° ªt±¬¿Ë¦±¯L¸;ÄTÉg¯¦­t°´¹ ÅÀ¯¦©ËÇÌg¯O˦° 
¸¶X¹^°´ÅVÂÆÌ^±tª«^°²±è¬À¸;Ä ®g±t¶oàǰ´¹ ª·«^°¯¦½²½´Ì±¯L½²ÒÂz±·¶¦¸ ¦ýXÓI^\!ª·¶J|Ó û^\ˆÓ¢¨C© ¶¦ª·«^°²±éµá¶¦±·¹g­²³¢¶Ç©^Å¿Ò{ª µ ¶v¹^°´­·½²±·¬À®^ª¬¿¶Ç©^­;¯O±·°Ô­·Ì Ä ½²¬À°´©|ª Âz¶¦±*Ìg­t°²±­ ª·¶ˆÌ^©g¹^°²±­tª·¯¦©^¹ ¯&ª·°²±¸Ý¬r©+êXÌ^°´­ Ä ª·¬À¶¦©2ÓáæØ°?¯L¹g¬À©^ˈ¯&Âz°´µÁ¹°?­·½¼±¬¿®gª·¬À¶¦©g­ ¬À­*©¶¦ª"ª·¬À¸°²Ä ½¼¶Ç©g­·Ì^¸ ¬À©^Ëg³.ÉD°?½²¯¦Ì^­t°0ª·«^°²Ò Ìg­·Ì^¯¦ÅÀÅÀÒ ½¼¶Ç©^­·¬r­tª ¶L ­·«^¶L±·ªá®S¯O±¯LËL±¯L®g«g­²Ó È{°ð¯LÅr­t¶!¬À©Xর´­tª¬¿Ë¦¯Lªt°?¹ ª·«^°ð°²ß~°?½¼ª¬¿àǰ?©°?­·­{¶¦Â ½²ÅrÌ^­tª·°²±¬À©^Ëg³gµ «°´±·°.ÂÆ¶L± °?¯L½²«hª·°´­tª"ªt°´±¸Ô³gµá°E½´ÅÀÌ^­ Ä ªt°´±·°´¹Î¹^°´­·½²±·¬À®^ª¬¿¶Ç©^­;¬À©|ª·¶ ª·«^±t°´°ˆ½²ÅrÌ^­tª·°²±­ ñ ¬À©{ª«° ½²¯¦­t°+µØ«^°²±·°ª·«^°²±·° ¯O±·° Å¿°´­­ ª·«g¯L©bÂz¶Ç̐±,¹^°´­·½²±¬¿®^Ä ª·¬À¶¦©g­²³¬À©g¹^¬ÀàX¬À¹gÌg¯LÅE¹°?­·½¼±¬¿®gª·¬À¶¦©g­Ôµ°´±t°6±t°´Ë¦¯L±¹°?¹ ¯¦­¹^¬¿ß~°´±·°´©|ª ½²ÅrÌ^­tª·°²±­õ³.¯L©g¹Û¶¦©gÅ¿Òѹ°?­·½¼±¬¿®gª·¬À¶¦©g­ ¹^°²ª·°²±¸ ¬À©^°´¹¯¦­Ï±·°´®^±·°´­t°?©|ª¯Oª¬¿à¦°èÉÒ ª«°¾Õfû! ¸°²ª«¶X¹;µ°´±t°®g±·°´­t°?©Xª·°´¹1¯¦­ª«°wS©g¯¦ÅDZ·°?­·Ì^ÅÀª²ÓÈΰ Âz¶ÇÌ^©g¹èª«^¯LªÓ\붦Â,¹^°´­·½²±·¬À®^ª¬¿¶Ç©^­Ú®^±·°?­t°´©|ª·°´¹ µ°´±·°½¼¶¦±·±t°?½¼ª<¶¦©^°´­²Ó騠©Ú¶Lª«°´±µá¶¦±·¹g­²³2Ìg­t°²±­<½²¯¦© ¶¦É^ª¯L¬À©v¹°?­·½¼±¬¿®gª·¬À¶¦©g­"ÂÆ±t¶Ç¸¹g¬ ßZ°²±·°?©|ª"àX¬À°²µØ®~¶Ç¬À©|ª­ ¯¦©^¹{µG¶L±¹{­t°´©g­t°´­²³¢¸ ¯¦¬À©Xª¯L¬r©^¬r©Ë+ª·«^°,°¼»ªt±¯L½²ª·¬À¶¦© ¯¦½²½´Ì±¯L½²Ò,¶¦Égª·¯¦¬À©°?¹+¯OÉ~¶oàǰ ñ ¬eÓ °LÓr³< ÇÓ û^\&õÓ Õ"¶oµG°²àL°²±´³¦µ°á½²¶¦©g½¼°?¹°Øª·«g¯Oª(µ°Ø¹^¬À¹©¶¦ª(¬r©XàL°?­ Ä ª·¬À˦¯Lªt°Hµ «°´ª·«^°²±¶¦±3©¶¦ª°?¯¦½¼«<½´ÅÀÌg­tªt°´±3½¼¶¦±t±·°?­t®D¶¦©g¹^­ ªt¶&¹g¬ ßZ°²±·°´©|ªØàX¬À°²µØ®D¶Ç¬À©Xª­G¬r©Ô¯ ±¬¿Ë¦¶L±·¶ÇÌ^­Ø¸ ¯¦©^©^°²±´Ó S¶¦±ª·«^°<®D¶ÇſҐ­t°´¸;Ò ®g±·¶LÉSÅ¿°´¸Ô³gµ°¬À©àL°´­tª¬¿ËǯOª·°´¹ ¯¦ÅÀÅSª·«^°"¹^°?­·½¼±¬¿®gª·¬¿¶Ç©g­H°¼»ªt±¯¦½¼ª·°´¹2³¯L©g¹&Âz¶ÇÌ^©g¹&ª·«g¯Oª ¶Ç©^Å¿Òð <_„‚‡ ‚_1C9 |3ƒ‚„: ñ ½²¶¦ÅrÅ¿¶½´¯Oª¬¿¶¦©~õ vµá¯¦­¯L­·­t¶X½´¬ Ä ¯Lªt°´¹ÁµØ¬Àª·«¾ªCµ¶Ïµá¶L±¹è­t°´©g­t°´­²³ª·«g¯Oªv¬À­²³ tµá¶L±¹ ½¼¶ÇÅÀÅÀ¶X½²¯Lª·¬¿¶Ç©^­] í¯L©g¹  ®D¶Ç­·¬¿ª¬¿¶Ç©Ý¶LÂV¸ ¯L½²«^¬r©°´±tÒLÓ ö*¸¶¦©^ËÚª·«^°ˆª«±·°²°ˆ±·°´®^±·°´­·°´©|ª¯Oª¬¿à¦°ˆ¹^°´­·½²±·¬À®^ª¬¿¶¦©g­ Ö¯LÉgÅÀ°ù„(  
»ªt±¯L½²ª·¬¿¶Ç©+¯L½´½²Ì^±·¯¦½²Ò,Âz¶¦±Øª·«^°.ü .ª·°´­tªØªt°´±·¸ ­ ñ ]t!Eõ誫°.©XÌ^¸1ÉD°²±Ø¶¦Â½¼¶¦±·±t°?½¼ª ¹°?­·½¼±¬¿®gª·¬À¶¦©g­²³ ];ÖÀõ誫°.ªt¶¦ª·¯¦Å2©XÌg¸éÉg°´± ¶L°¼»ªt±¯L½²ªt°?¹Ô¹^°?­·½¼±¬¿®gª·¬¿¶Ç©g­²³Sö õÁ¯L½´½²Ì^±¯L½²Ò ñ \&õtõÓ °[` ’wý®6Ÿ”3®¥3¶ ° ý ®6Ÿ”3®¥3¶ a ¥3¬¥3¤l¢†¡¢ ýD¢©®6¶ ba¤”CŽŸ¡‘dc_Ž’,¡¡ efa¥3”C¢†¡gefhieFý Œ ejhkeFý Œ l Ÿ¬"£W™ §mC™onEm3œ0¨m>p œ l Ÿ¬"£Lq ¡hŽà¥ ° rNs r rtrOu8u r rvrOu8u š>p œ"¨©˜©¨6œl™ ¨©˜wxy<m ¥3µ†µ†¢†¡¡µ†’C¤®6’CŽ z"¯ {< s rOu  u su º u rOu  u su º u ¦©œ"§l¨%nEmC™|x,šN}Om3œl™ ›~wp3šw ¹"’0µ†“l¶w¢†¤0-Ÿ¶­¥3”C¢ “l¤„¹"¢©®6¡W¥3¤¹"Ÿ¤l” < r r€rOu8u r r€rOu8u  n6wª;˜‚pNw™W˜˜ƒ˜©§lªm Ÿ¤0¢†ŽŽŸ”C¢†¤¥3”C¢†¤0 < s z u º u s z u º u —˜˜ª;šC™C„wšw§w§Ix3œ ¹¥]¥³¶FŸ¤lŸ¤l” "¯ 8…8{ 6† <{‡† s º s u  u † s º u —˜§¨%n6w™ ¨œ6p3š3¨%n6w ¹"Ÿ”CŸ6¥3Ž ° ¥3¢©®6¶­¥]®%ˆ 0¯ r > { <‰{ u º z { <‰{ u º z —˜§¨%n6w™ ªm3¨ŠnEm>pCš3§ ¹"Ÿ”CŸ6¥3Ž ŽŸ·l®¥]®Š†0¯ {8 8… rOu z‡ 8…"º s … r †‹I†0º r x,šO}Nm3œ"™op,˜§¨šp œ Ÿ¶­¥3”C¢_®6¢©®6Ÿ¢†¸C¥3Ž r ¯ z8{ r Œ s º u r ‡ 8 "º x3œ"›œ0œN„œEŽp˜š ”3®6’C“l¬ ° ¥]®6¢ r {"¯ †z u r   u s º u r   rs †0º r n6wpCš3›~w«™ ‘©šw«¦š,š ’C¬lŸµ¥3Ž“’·„¢©® rOu ¯ u †… r †  s z8…lº u r   r z8z"º † w  n6w™opC˜‚w¨‚m>p]œ ¬„’,¡ŸŸ’C¤f¶w¢¥3¡6“"®6¢†¶w¢†¤0 † s u u u u w«—˜©§lªn˜~p>w™nš3›œ8x<m3›‚w}†œI„Fœ ”C¢†¤l¢©6Ÿµ ¥3Ž”C’3®6Ÿ6‘l¶ l¯ z8…8z > r †8†0º  8 … †…"º z ƒ‚w§”p8m3œ"™  n6w§m3œ ¥]®Ÿ’µ†Ÿà¥3Ž Ÿ¤06¢†ŽŽŸ”C¢†¤lµ†¢ r …"¯ r { u rOu r { s 0º z { r ‡z8{"º  ƒ‚w›‚wª²¨œ"™Cw«—<m3œl™ ›%m,¦%mCª«ªm ¥3“"6’C¤l’C¶w’C“l¡¶w’C·lŸŽ¢_®6’C·’3 †{<   rOu8u   rOu8u ƒ‚w¨†˜—,šw™Cw§ª;š,š3§˜©ª«ªm ¤l¢0”C¢†¤l¢©®¥]6Ÿ’C¤–•;¤06¢©®6¤l¢© r ¯ {8z8 z rOu z u º u z rOu z u º u pNw|w޽š,š,—<mC™ ƒ‚w²—<m3œ"™  nœ"œ0¨%nœ0ª²¨6œ$ˆ,¢° ’3®¹±¥3“l’,¶w¥]6Ÿµ ¢0W®¥Cµ©Ÿ’C¤  s r rtrOu8u r rvrOu8u pNwp3šw™onEm3§— yCšp œ ¶­¥3µ©‘lŸ¤¢®¥3¤¡Žà¥]Ÿ’C¤ "¯ r  r r rOu˜rOu º u u … u pm3›~m>pC˜‚w¨ŠnEm3§ µ†’CŽŽ’0µ†¥3Ÿ’C¤ s 6† † r z‹< "º … † rNs <z"º † pm3¨%nEm3œl™²¨%n6w§—,š3§ £«¥3“lŽ¹lŸ¥3”C¤’C¡Ÿ¡ r ¯ z8…<  s  u º u   su º u „wš3›œ  n6wpNy,š3¨œ0ªm ¶F“lŽŸµ¥3¡W s ¯ † s … r …  s 
†80º u rNs 8‰z8…"º  „­˜—w«šC™n—<m3œIpNw ¶F¢¹"Ÿà¥F¡Š0¤µ‘l®6’C¤lŸ™¥]6Ÿ’C¤ <z r r€rOu8u r r€rOu8u §?˜ª²ªCm޽š,šp œ"™ ªm‚”m3›Šmšƒww ¤l¢© ° ’3®%ˆF6’C¬?’CŽ’C”< 8… r › s º u u u §y]œ"œ0›6š3›œl™ §˜©ª«ªm޽š,š>p]œ ¤l¢†“"®¥3ޤl¢© ° ’3®%ˆ {"¯ s <† <† 6†‡†…"º † 8z  s … u º u ›‚w§Ix3œl™xš3ª šC™ §?˜ª²ªm޽š,š>p œ ®6Ÿ¤”w¤l¢© ° ’3®%ˆ 8 u r u u r u ¨%n6w¨m3œ0›6š3¨œ 6‘l¢†¡6¥3“"®6“¡ "¯ 8{<{  r  ‡{ r º r {  u { s º u ¨m3œ0›6š,šC™opCš,š ¡’CŽà¥]®µ¥]® "¯ z8{8… r   r‹s †0º r r   rs †0º r ªn˜›%m„­˜6š 6¢†Ž’C¶w¢©®6¢ …<†8 z 8z †80º   s Œ† "º s ’C6¥3Ž œ rOu {"¯ u <{{<›<z u z8 "º s z8zž 8{<‹z<†0º { Âz¶¦± 8_„‚‡ ‚_?1C9;|]ƒ‚: ñ ½²¶¦ÅÀÅÀ¶½²¯Lª·¬À¶¦©~õ³ ݪCµ¶ ½¼¶¦±t±·°²Ä ­t®D¶¦©g¹^°´¹>ªt¶ ª·«^°èg±­tªÎ­t°´©g­t°L³,¯L©g¹>¶¦©^°Ï½¼¶¦±t±·°²Ä ­t®D¶¦©g¹^°´¹ ªt¶Ðª·«^°Ê­t°´½²¶¦©g¹è­t°´©g­t°LÓÖ¶Ï­·Ìg¸ Ì^®2³ ª·«^°,Õu±!!½²ÅÀÌg­tªt°´±¬À©^ˁ¸°´ª·«^¶X¹V½²¶L±·±·°´½²ª·ÅÀÒv¬r¹°?©Xª¬ Ä g°?¹h®D¶ÇſҐ­t°´¸;ÒLÓ Ÿ ( äã §W!TMHKOPäã ¨C©{ª«^¬r­1®S¯O®Z°²±´³:µ°,®g±·¶L®D¶Ç­t°´¹Ê¯v¸;°´ª·«^¶¹Îª·¶ °¼»XÄ ªt±¯L½²ª °´©g½¼Ò½²ÅÀ¶L®D°?¹^¬r½"ìX©^¶oµ Å¿°?¹Ë¦°ØÂƱ·¶¦¸íª·«^°"ÈV¶¦±·År¹ ÈЬÀ¹^°<È{°´É2Ó S¶L±¢°¼»ªt±¯L½²ª·¬r©Ë*ÂÆ±·¯LËǸ;°?©Xª­¶¦ÂSÈ{°²Éé®S¯O˦°´­¢½²¶¦©Ä ª·¯¦¬À©g¬À©^ËÛª·°²±¸ ¹^°´­·½²±·¬À®^ª¬¿¶Ç©^­²³&µG°$Ì^­t°?¹ ÅÀ¬À©^˦Ìg¬À­ Ä ª·¬r½¯¦©g¹ÏÕ"ÖØ×ÚÙ¾­tªt±Ìg½¼ª·Ì^±¯LÅØ®g¯Lªtª·°²±©g­&ª ÒX®g¬r½²¯¦ÅÀÅ¿Ò Ì^­t°?¹!ªt¶Û¹^°?­·½¼±¬¿ÉD°Ãªt°´±·¸ ­²ÓëÖØ«^°´©2³µG°bÌg­t°´¹>¯ ÅÀ¯¦©ËÇÌ^¯LËL°1¸¶X¹^°´Åªt¶,¹^¬r­·½²¯L±·¹¬À±t±·°?Å¿°²àL¯¦©|ªØ¹^°´­·½²±¬¿®Ä ª·¬À¶¦©g­²ÓÊÈΰh¯LÅÀ­t¶{Ìg­t°´¹Ã¯v½´ÅÀÌ^­·ªt°²±¬À©^Ë{¸°²ª·«^¶X¹bªt¶ ­·Ì^¸ ¸ ¯L±¬¿Í²°$°¼»ªt±¯¦½¼ª·°´¹ ¹^°´­·½²±·¬À®^ª¬¿¶¦©g­VÉS¯L­t°?¹ ¶¦© ¹^¬¿ß~°´±·°´©|ªGà¬¿°´µØ®~¶¦¬r©|ª·­Ø¯¦©^¹Ôµ¶¦±¹Ô­t°?©^­t°?­²Ó È{°°²àL¯¦ÅÀÌg¯Oª·°´¹1¶ÇÌ^±(¸°´ª·«^¶X¹éɐÒ.µG¯cÒE¶¦ÂS°¼»®~°´±·¬¿Ä ¸°´©Xª·­²³¦¯¦©^¹;Âz¶ÇÌ^©g¹ª·«g¯Oª¢ª«° ¯L½´½²Ì^±¯L½²Ò.¶¦ÂS¶¦Ì^±(°¼»XÄ ªt±¯¦½¼ª¬¿¶¦©Ô¸°²ª«¶X¹hµá¯¦­H®g±¯L½²ª·¬À½´¯LÅe³^ª·«g¯OªØ¬À­²³^¯;Ìg­t°´± ½²¯¦©&Ì^©g¹^°²±­tª·¯¦©^¹,¯.ª·°²±¸ï¬r© êXÌ^°´­tª¬¿¶¦©2³XÉXÒ;Ég±t¶LµØ­ Ä ¬À©^ËðªCµ¶Ï¹^°´­·½²±¬¿®^ª¬¿¶Ç©^­²³é¶Ç©Á¯cর´±¯O˦°LÓÈ{°Î¯¦ÅÀ­t¶ Âz¶ÇÌ^©g¹ 
ª«^¯Lª"ª·«^°;ÅÀ¯¦©ËÇÌ^¯LËL°¸;¶^¹°?ůL©g¹ª·«^°;½²ÅrÌ^­ Ä ªt°´±¬À©^Ëø°²ª«^¶¹ÐÂó̐±·ª·«^°²±+°?©^«g¯L©g½¼°?¹Û¶¦Ì^±+Âz±¯L¸°²Ä µ¶¦±·ìDÓ DÌ^ª·Ì^±t° µ¶¦±·ìéµ ¬rÅÀÅD¬À©^½´ÅÀÌg¹°"°²»X®D°´±¬À¸°´©ª·­HÌg­·¬À©^Ë ¯{ÅÀ¯L±t˦°²±,©XÌg¸éÉg°´±&¶Lªt°?­tª ª·°²±¸ ­²³G¯L©g¹$¯O®g®SÅÀ¬À½´¯oÄ ª·¬À¶¦©b¶L°¼»ªt±¯¦½¼ª·°´¹$¹^°´­·½²±·¬À®^ª¬¿¶¦©g­ªt¶{¶¦ª·«^°²± ò Ùô ±·°´­t°?¯O±½«2Ó I6§Çd:ãHägf !]Så€OQÏ]Sã3J¦K ÖØ«°¯¦Ì^ª·«^¶L±­µ¶¦ÌgÅÀ¹,År¬¿ìǰ ªt¶;ª·«g¯L©^ì&Õ*¬¿ª¯L½¼«g¬½v¬¿ËLÄ ¬¿ª¯LÅ"Õ"°´¬ÀÉ~¶¦©g­·«g¯³á¨ ©^½LÓèÂz¶L±ˆª·«^°´¬¿±Ô­·Ì®g®D¶¦±tªˆµ ¬¿ª« ª·«^°y!Fv*ÄTæ  × ÈζL±ÅÀ¹  ©^½²Ò½²Å¿¶¦®~°?¹^¬À¯^³<×Ú¯LìǶ¦ªt¶ ¨ÿµá¯cÒ¦¯L¸ ¯b¯L©g¹ÐÖ:¯OìL°?©¶¦ÉSÌÐÖ3¶¦ìÌ^©g¯OËǯÎÂz¶¦±+ª·«^°´¬¿± ­·Ì®g®D¶¦±tªVµ ¬¿ª·«¾ª·«^°bÕfû!5½²ÅrÌ^­tª·°²±¬À©^ËÑ­t¶¦Âzª µ¯L±·°L³ ¯L©g¹ ò ¶¦±¬¿ì¦¶ ø ¯L©g¹¶ ñóò ¯Lª·¬¿¶Ç©^¯¦Å¢¨ ©^­tª¬¿ª·Ì^ª·° ¶¦Â ¨C©Ä Âz¶¦±·¸ ¯Lª·¬r½²­²³´¦¯L®S¯L©~õ,ÂÆ¶L± «^°²± ­·Ì^®^®D¶¦±·ª+µ ¬¿ª«Ûª·«^° ò öu! þ ¨ þ ½²¶¦ÅrÅ¿°´½²ª·¬À¶¦©2Ó ¡ ] #t]g¥¦]Sã §]SK ôH«g¬ÀÅÀ¬À®»!GÅÀ¯O±·ì­t¶¦©Ô¯L©g¹hæ ¶¦©g¯LÅr¹hæØ¶Ç­t°´©^Âz°?ÅÀ¹2Óáù´û¦û|Ó þ ª·¯Lª·¬À­tª¬À½´¯LÅ0ÅÀ¯¦©^˦Ìg¯O˦° ¸¶¹^°´År¬À©^ËÞÌ^­·¬r©ËÞª·«^° !G× * Ä©!G¯L¸éÉ^±¬À¹^˦°Øª·¶|¶¦Å¿ì¬¿ª´Ó¢¨C©£¢w‡3‚I¤,131/9;:2=„|u‚  ¥w>‡ ‚¦“§<131O¤,ƒ©¨ ªX«´³®S¯O˦°´­*üz I¬Xü |ù0zÓ  ±t°?©  ª·Í´¬¿¶Ç©^¬eÓ<ù´û¦û|Óá× ¶Oà¬À©^ˈÌ^® ª·«^°;¬À©^Âz¶L±¸ ¯OÄ ª·¬À¶¦© Âz¶|¶¹&½²«^¯¦¬À©2Ó®­°¯²±¾5"=5³,9;:-1¼³Dù"x ñ üÇõ3(rù¦ù¬~ù0xÓ ´H¯¦­·¬ÀÅ¿°?¬¿¶Ç­ÊÕ*¯LªtÍ?¬¿àL¯L­·­·¬rÅ¿¶LËÇÅ¿¶Ç̯¦©^¹ ø ¯Oª«^ÅÀ°²°´© æEÓ ×Ú½ ø °´¶oµ ©ZÓ0ù?ûLû¦ýXÓVÖ3¶oµ¯L±¹^­hª·«^°V¯¦Ì^ªt¶Ç¸¯Lª·¬r½ ¬À¹^°´©Xª·¬D½²¯Lª·¬À¶¦©V¶¦ÂG¯¦¹cît°´½²ª·¬¿àL¯¦Å(­·½´¯LÅ¿°?­(t!GÅÀÌg­tªt°´± Ä ¬À©^˾¯¦¹cît°´½¼ª¬¿à¦°?­Î¯¦½²½²¶L±¹g¬À©^ËÁªt¶¾¸°?¯L©g¬À©^Ë^ÓÁ¨ © ¢w‡ ‚E¤,131/„9n: =„|t‚  4nƒa1 µ¶„|,4f­A:a:<>D5·± 131,49;: = ‚  4nƒa1¸­A|3|‚”¤9W5?4W9‚„:  ‚‡º¹ ‚ 8 §½>D465?4W9‚„:p5»H9;:2=>29;|C7 4W9Ф,| ³g®g¯LËL°?­Eùoü”¬Dù"xLüÓ Õ*¬¿ª·¯¦½«g¬ûv¬À˦¬Àª·¯¦ÅØÕ"°´¬ÀÉ~¶¦©g­·«g¯Ó6ù?ûLû?xXÓñ!Fv*ÄTæ  × È{¶L±ÅÀ¹  ©g½¼Ò½²Å¿¶¦®~°?¹^¬r¯Ó ñ ¨C©)¦¯O®S¯¦©°?­t°oõ Ó ×Ú¯OìǶLªt¶Ï¨Cµá¯?Ò|¯L¸ ¯Ï¯L©g¹ÁÖ¢¯O즰?©¶¦ÉSÌèÖ3¶¦ìXÌg©^¯L˦¯^Ó ù?ûLû?IXÓ 
Õ*¬À°²±¯O±½¼«g¬À½´¯LÅFG¯OÒL°´­·¬r¯L©b½²ÅÀÌg­tªt°´±¬À©^ËvÂz¶¦± ¯¦Ìª·¶¦¸ ¯Lª·¬À½Vªt°²»Xª ½´ÅÀ¯L­·­·¬S½´¯Oª¬¿¶Ç©ZÓ{¨C©¼¢w‡]‚E¤C1C1/7 9;:2=„|­‚  4nƒa1½¶I¾<4nƒ¿¯]:<41,‡C:½5?4W9W‚:p5WÀD‚„9n:<4®¹ ‚:  1C‡C7 1C:,¤,1'‚:g­f‡,4W9 Áf¤9W5,¯]:½4189=21,:¤1¼³®g¯LËL°?­Eù´ý¦ü¦üE¬ ù?ýLü ÇÓ ¦¯O®S¯¦©  Å¿°?½¼ªt±·¶Ç©^¬r½ñv¬À½²ª·¬¿¶Ç©g¯O±·ÒÃæØ°?­t°´¯L±·½«Ã¨ ©^­tª¬ Ä ª·Ì^ªt°¦Ó ù´û¦ûIÓ  væ °´ÅÀ°´½²ªt±·¶¦©g¬À½ ¹^¬r½¼ª¬¿¶¦©g¯L±tÒ ªt°?½«g©^¬r½²¯¦Å~˦Ìg¬À¹^°LÓ ò ¶L±¬¿ì¦¶ ø ¯L©g¹¶g³ ø ¯LÍ´Ì^즶 ø Ì^±¬¿Ò|¯¦¸ ¯³Ç¯L©g¹&Ö3¶Ç­·«^¬¿Ä «g¬¿ì¦¶ ò ¶¦Í´Ì^°LÓù?û¦ûLûÓ ò öf! þ ¨ þ ª·°´­tª.½¼¶ÇÅÀÅ¿°?½¼ª¬¿¶¦© µ¶¦±·ìX­·«^¶L® ñóò Öu! ¨ æáÄùoõÓ ¨C©Ã¢w‡]‚E¤C131 /„9n: =„|¼‚  4nƒa1¸Ä^Ä?:p/½­A:a:<>D5¯]:<4©1C‡C:½5?4W9‚„:p5­ ¹Å±k¦ ¯”Ʋ¯LÇ ¹ ‚:  1C‡31C:¤,1»‚:›Çu13|"15‡N¤,ƒ 5:½/ÉÈJ18Ê18«‚8§ 8 1C:<4 9;:g¯]:  ‚‡ 8 5?4W9W‚:gÇf1,4W‡C918Ê5 ³®g¯L˦°´­"ü¦ûLû”¬Xýz?zÓ ¦Ì^År¬À¯¦© ø Ì^®g¬À°´½Ý¯¦©^¹ L¶Ç«^© ×Ú¯O»XµG°´ÅrÅóÓ ù?ûLû¦üXÓ Ö3±¯¦¬À©g¬À©Ë ­tªt¶X½²«^¯¦­tª·¬r½˦±·¯¦¸ ¸ ¯O±­!Âz±·¶Ç¸ Ì^©Ä ÅÀ¯LÉD°´ÅrÅ¿°´¹\ª·°¼»ª ½²¶L±·®D¶¦±·¯^Ó\¨ © Ë)‚„‡_l|]ƒ‚8§ ‚„: ¦½4549 |,4W9;¤C5̄7ŠÍf5„|01]/Î´5?4W>‡ 5j»_5: =>a5"=21Ï¢w‡ ‚7 =‡ 5 8–8 9;: = Ðh1O¤,ƒ :a9ŠÑ>½13| Ó3ö*ö*ö ¨2Ö3°?½«^©g¬À½´¯LÅ¦æØ°²Ä ®D¶¦±tª­áÈ þ ÄCûLüočzgùOÓ ×Ú¯L¬r©^¬r½«^¬ þ «g¬À¸;ÉgÌg©2Ó ù´û¦û„GOÄù?û¦ûIÓ ×Ú¯¦¬À©g¬À½¼«g¬ ­·«g¬À¸1ÉSÌg©»!wv"Äÿæ  ×  ûGLÄ  û?IXÓ ñ ¨ ©)¦¯L®S¯L©^°´­t°OõÓ ˆ Ìoît¬2×Ú¯Lª·­·Ìg¸¶¦ªt¶^³Sö"ìX¬¿±¯ ø ¬¿ª¯LÌg½¼«g¬ó³^Ö:¯Oª­·Ì¶+ˆG¯OÄ ¸ ¯L­·«g¬¿ª¯³  ­·¯¦¸;Ì5¨C¸ ¯¦¬À½«g¬ó³Ð¯L©g¹AÖ3¶Ç¸;¶Ç¯LìX¬ ¨C¸ ¯¦¸éÌ^±¯Ó¾ù?ûLû ÇÓE¦¯L®S¯L©^°´­t°$¸¶L±·®S«¶ÇÅ¿¶¦Ë¦¬À½´¯LÅ ¯¦©^¯¦Å¿Ò­·¬À­G­tÒX­tª·°´¸‰!«g¯ þ °?©h¸ ¯L©Ì^¯¦ÅóÓ:Ö°´½«g©^¬r½²¯¦Å æ °²®D¶¦±tª ò ö"¨ þ ÖÄT¨ þ ÄTÖØæ"û lz?z |³ ò ö"¨ þ Ö<Ó ñ ¨C© ¦¯L®g¯¦©^°´­t°OõÓ ö*©^¹^±·°²µ ×Ú½"!G¯LÅrÅÀÌg¸Ô³ ø ¯¦¸ ¯LÅ ò ¬¿ËǯL¸Ô³¼¦¯¦­t¶¦© æ °´©g©^¬¿°¦³6¯¦©^¹ ø ±¬À­tª¬¿° þ °²Ò¸¶L±·°LÓ ù´û¦ûLûÓ9ö ¸ ¯¦½«^¬r©°ÜÅ¿°?¯O±©g¬À©^Ë÷¯O®g®^±·¶Ç¯L½¼«ª·¶ ÉgÌg¬ÀÅr¹^¬r©Ë ¹^¶¦¸ ¯¦¬À©ÄT­t®D°?½²¬D½"­t°´¯L±½¼« °´©^˦¬r©°?­²Ó¢¨C©£¢w‡3‚I¤,131/7 9n: =„|­‚  4nƒa1Ò¶ӄ4nƒd¯]:<4©1C‡C:½5?4W9‚„:p5XÀ‚9;:<4¸¹ ‚„:  1C‡C7 1C:,¤,1&‚:‡­A‡,4W9 
Á²¤95¸¯]:<418š9«=21C:¤1³:®S¯O˦°´­ ?Lü”¬ ?|Ó ¦¬À¯¦©Äˆ Ìg© ò ¬¿°¦³¾×Ú¬À½¼«^°´Å þ ¬À¸ ¯L±·¹2³¾ôH¬¿°´±·±t°Þ¨C­ Ä ¯LÉD°´ÅrÅ¿°L³¯¦©^¹Væ ¬À½«g¯L±·¹vÌ^±·¯¦©g¹ZÓù?ûLû¦ûXÓ'!±·¶¦­·­ Ä År¯L©^˦Ìg¯O˦°ï¬À©ÂƶL±¸ ¯Lª·¬¿¶Ç©\±t°´ªt±¬¿°´àL¯LÅvÉS¯¦­t°´¹\¶¦© ®S¯O±¯¦ÅÀÅ¿°?Å^ª·°¼»ª·­¯L©g¹ ¯¦Ì^ªt¶Ç¸ ¯Oª·¬r½"¸ ¬À©^¬r©Ë.¶¦Â~®S¯O±tÄ ¯¦ÅÀÅ¿°?Å~ªt°²»Xª­áÂz±·¶¦¸ª«°EÈV°´É2Ó(¨C©g¢w‡ ‚”¤C131/9n: =„|'‚  4nƒa1¸Ä^Ä?:p/½­A:a:<>D5¯]:½41C‡C:½5?4W9‚:½5­ ¹Å±k¦ ¯”Ʋ¯LÇ ¹ ‚:  1C‡31C:,¤C1»‚:›Çf13|015‡N¤ƒy5:½/ÔȖ18Ê1<‚8§ 8 1C:½4 9n:Õ¯]:  ‚‡ 8 5?4W9‚„:gÇf1,4‡C9618Ê5 ³®S¯O˦°´­AlG¬2x^ùOÓ ôH«g¬ÀÅÀ¬À®íæØ°?­·©^¬¿ì~Óïù´û¦ûLûÓè×Ú¬r©^¬r©Ë誫°ÏÈ{°²É Âz¶¦± ÉS¬ÀÅÀ¬r©ËÇÌ^¯¦Å(ªt°¼»ª·­²ÓE¨C© ¢w‡ ‚”¤C131 /9;:2=„| ‚  4nƒa1Öµ"«"4nƒ ­f:a:<>D5‡± 1C1,4W9;:2=}‚  4WƒD1×­A|3|,‚”¤09W5?4W9W‚:  ‚„‡ ¹ ‚ 8 §p>D465?4W9W‚:½5» 9;:2=>29;|C49;¤,|³^®S¯O˦°´­fILü 6¬ILý„G^Ó S±¯L©^ì þ ¸ ¯¦¹cî·¯³ ø ¯Oª«^ÅÀ°²°?©ðæEÓG×Ú½ ø °´¶Oµ ©2³Ø¯L©g¹ ´ ¯L­·¬ÀÅÀ°´¬À¶¦­<Õ*¯Oª·Í´¬ÀàO¯¦­·­·¬ÀÅÀ¶LËÇÅ¿¶¦Ì2Ó1ù´û¦ûÓ"Ö¢±¯L©g­·ÅÀ¯Lª Ä ¬r©ËϽ¼¶ÇÅÀÅ¿¶^½²¯Lª·¬¿¶Ç©g­+Âz¶¦± Ég¬rÅÀ¬À©^˦Ìg¯¦Å.ÅÀ°¼»^¬À½¼¶Ç©g­(Ðö ­tª¯Oª¬À­tª·¬r½²¯¦Å ¯L®^®g±·¶¦¯¦½«ZÓù ‚ 8 §½>D465?4W9‚„:p5º»H9;:<7 =>9 |,4W9Ф,| ³DüLü ñ ùoõ](lù¬ýxÓ
2000
62
Term Recognition Using Technical Dictionary Hierarchy

Jong-Hoon Oh, KyungSoon Lee, and Key-Sun Choi
Computer Science Dept., Advanced Information Technology Research Center (AITrc), and Korea Terminology Research Center for Language and Knowledge Engineering (KORTERM)
Korea Advanced Institute of Science & Technology (KAIST)
Kusong-Dong, Yusong-Gu, Taejon, 305-701 Republic of Korea
{rovellia,kslee,kschoi}@world.kaist.ac.kr

Abstract

In recent years, statistical approaches to ATR (Automatic Term Recognition) have achieved good results. However, there is still scope to improve the performance of term extraction. For example, domain dictionaries can improve the performance of ATR. This paper focuses on a method for extracting terms using a dictionary hierarchy. Our method produces relatively good results for this task.

Introduction

In recent years, statistical approaches to ATR (Automatic Term Recognition) (Bourigault, 1992; Dagan et al., 1994; Justeson and Katz, 1995; Frantzi, 1999) have achieved good results. However, there is still scope to improve the performance of term extraction. For example, additional technical dictionaries can be used to improve the accuracy of term extraction. Although the hardship of constructing an electronic dictionary was a major obstacle to using electronic technical dictionaries in term recognition, the increasing development of tools for building electronic lexical resources offers a new chance to use them in the field of terminology. Through these endeavors, a number of electronic technical dictionaries (domain dictionaries) have been acquired. Since newly produced terms are usually made out of existing terms, dictionaries can be used as a source of them. For example, 'distributed database' is composed of 'distributed' and 'database', which are terms in the computer science domain. Further, concepts and terms of a domain are frequently imported from related domains.
For example, the term 'Geographical Information System (GIS)' is used not only in the computer science domain, but also in the electronics domain. To use these properties, it is necessary to build relationships between domains. The hierarchical clustering methods used in information retrieval offer a good means for this purpose. A dictionary hierarchy can be constructed by a hierarchical clustering method. The hierarchy helps to estimate the relationships between domains. Moreover, the estimated relationships between domains can be used for weighting terms in the corpus. For example, the domain of electronics may be deeply related to that of computer science. As a result, terms in the dictionary of the electronics domain have a higher probability of being terms of the computer science domain than terms in the dictionaries of other domains do (Felber, 1984). Recent works on ATR identify candidate terms using shallow syntactic information and score the terms using statistical measures such as frequency. The candidate terms are ranked by their scores and truncated by thresholds. However, a purely statistical method may not give accurate performance in the case of small corpora or very specialized domains, where the terms may not appear repeatedly in the corpora. In our approach, a dictionary hierarchy is used to avoid these limitations. In the next section, we give an overall description of the method. In sections 2, 3, and 4, we describe the primary methods and their details. In section 5, we describe experiments and results.

1 Method Description

The proposed method is shown in figure 1. There are three main steps in our method. In the first stage, candidate terms, which are complex nominals, are extracted by a linguistic filter, and a dictionary hierarchy is constructed. In the second stage, candidate terms are scored by each weighting scheme.
In the dictionary weighting scheme, candidate terms are scored based on the domain dictionaries in which they appear. In the statistical weighting scheme, terms are scored by their frequency in the given corpus. In the transliterated word weighting scheme, terms are scored by the number of transliterated foreign words they contain. In the third stage, each weight is normalized and combined into the Term Weight (Wterm), and terms are extracted by the Term Weight.

Figure 1. The method description

2 Dictionary Hierarchy

2.1 Resources

Field: Agrochemical, Aerology, Physics, Biology, Mathematics, Nutrition, Casting, Welding, Dentistry, Medical, Electronical engineering, Computer science, Electronics, Chemical engineering, Chemistry, ... and so on.

Table 1. A fragment of the list of domain dictionaries used for constructing the hierarchy.

A dictionary hierarchy is constructed using bilingual (English-to-Korean) dictionaries of fifty-seven domains. Table 1 lists some of the domains used for constructing the dictionary hierarchy. The dictionaries belong to domains of science and technology. Moreover, terms that do not appear in any dictionary (henceforth, unregistered terms) are complemented by a domain-tagged corpus. We use a corpus, called the ETRI-KEMONG test collection, with documents from seventy-six domains, to complement unregistered terms and to eliminate common terms.

2.2 Constructing the Dictionary Hierarchy

A clustering method is used for constructing the dictionary hierarchy. Clustering is a statistical technique to generate a category structure using the similarity between documents (Anderberg, 1973). Among the clustering methods, a reciprocal nearest neighbor (RNN) algorithm (Murtagh, 1983) based on a hierarchical clustering model is used, since it joins the clusters minimizing the increase in the total within-group error sum of squares at each stage and tends to produce a symmetric hierarchy (Lorr, 1983).
The algorithm to form clusters can be described as follows:

1. Determine all inter-object (or inter-dictionary) dissimilarities.
2. Form a cluster from the two closest objects (dictionaries) or clusters.
3. Recalculate the dissimilarities between the new cluster created in step 2 and each other object (dictionary) or cluster already made (all other inter-point dissimilarities are unchanged).
4. Return to step 2 until all objects (including clusters) are in one cluster.

In the algorithm, each object is treated as a vector Di = (xi1, xi2, ..., xiL). In step 1, inter-object dissimilarity is calculated based on the Euclidean distance. In step 2, the closest pair is determined by an RNN. For given objects i and j, we say there is an RNN relationship between i and j when the closest object of i is object j and the closest object of j is object i. This is the reason why the algorithm is called an RNN algorithm. A dictionary hierarchy is constructed by the algorithm, as shown in figure 2, which presents a fragment of the whole hierarchy containing ten domains.

Figure 2. A fragment of the whole dictionary hierarchy: domains clustered in the same terminal node, such as chemical engineering and chemistry, are highly related.
2.3 Scoring Terms Using the Dictionary Hierarchy

The main idea of scoring terms using the hierarchy is based on the premise that terms in the dictionaries of the target domain, and terms in the dictionaries of domains related to the target domain, act as positive indicators for recognizing terms, while terms in the dictionaries of domains that are not related to the target domain act as negative indicators. We apply this premise to scoring terms using the hierarchy. There are three steps in calculating the score.

1. Calculating the similarity between domains using formula (2.1) (Maynard and Ananiadou, 1998), where Depth_i is the depth of the domain_i node in the hierarchy and Common_ij is the depth of the deepest node shared by domain_i and domain_j on the path from the root. In formula (2.1), the depth of a node is defined as its distance from the root; the depth of the root is 1. For example, let the parent node of C1 and C8 be the root of the hierarchy in figure 2. The similarity between 'Chemistry' and 'Chemical engineering' is then calculated as shown in table 2:

Domain | Chemistry | Chemical Engineering
Path from the root | Root->C8->C9->Chemistry | Root->C8->C9->Chemical Engineering
Depth_i | 4 | 4
Common_ij | 3 | 3
Similarity_ij | 2*3/(4+4) = 0.75 | 2*3/(4+4) = 0.75

Table 2. Similarity_ij calculation: the table shows an example of calculating similarity using formula (2.1) for the Chemical engineering and Chemistry domains. Path, Depth, and Common are determined according to figure 2, and the similarity between the two domains comes out to 0.75.

2. Scoring a term by the distance between the target domain and the domains where the term appears, using formula (2.2), where N is the number of dictionaries in which the term appears and Similarity_ti is the similarity between the target domain and the domain dictionary in which the term appears. For example, in figure 2, let the target domain be physics and let a term 'radioactive' appear in the physics, chemistry, and astronomy domain dictionaries.
Then the similarity between physics and each domain in which the term 'radioactive' appears can be estimated by formula (2.1), as shown below. Finally, Score(radioactive) is calculated by formula (2.2): the score is (0.4+1+0.7)/3 = 0.7.

N | 3
similarity physics-chemistry | 0.4
similarity physics-physics | 1
similarity physics-astronomy | 0.7
Score(radioactive) | 2.1*1/3 = 0.7

Table 3. Scoring terms based on similarity between domains

3. Complementing unregistered terms and common terms using domain-tagged corpora, by formula (2.3).

similarity_ij = (2 x Common_ij) / (Depth_i + Depth_j)    (2.1)

Score(term) = (1/N) x SUM_{i=1..N} similarity_ti    (2.2)

where W is the number of words in the term 'α' and dof_i is the number of domains in which the i-th word of the term appears in the domain-tagged corpus. Consider two exceptional cases. First, there are unregistered terms that are not contained in any dictionary. Second, some commonly used words can be used to describe a special concept in a specific domain dictionary. Since an unregistered term may be a newly created term of the domain, it should be kept as a candidate term. In contrast, common terms should be eliminated from the candidate terms. Therefore, the score calculated in step 2 is adjusted for these purposes. In our method, the domain-tagged corpus (ETRI, 1997) is used. Each word in a candidate term (candidate terms are composed of more than one word) can appear in the domain-tagged corpus, and we can count the number of domains in which the word appears. If the number is large, the word has a tendency to be a common word. If the number is small, the word has a high probability of being part of a valid term. In this paper, the score calculated from the dictionary hierarchy is called the Dictionary Weight (WDic).
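Formulas (2.1) and (2.2) can be illustrated with a small sketch; the parent-map representation of the hierarchy, the toy domains, and all function names are assumptions made for the example.

```python
# A minimal sketch of formulas (2.1) and (2.2): node depths are read off a
# parent map of the hierarchy, similarity is 2*Common/(Depth_i+Depth_j), and
# a term's score averages the similarity between the target domain and each
# domain whose dictionary lists the term. The tiny hierarchy is illustrative.
def path_to_root(parent, node):
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path[::-1]  # root first; depth of the root is 1

def similarity(parent, d1, d2):
    p1, p2 = path_to_root(parent, d1), path_to_root(parent, d2)
    common = sum(1 for a, b in zip(p1, p2) if a == b)  # depth of deepest shared node
    return 2 * common / (len(p1) + len(p2))            # formula (2.1)

def score(parent, target, domains_with_term):
    sims = [similarity(parent, target, d) for d in domains_with_term]
    return sum(sims) / len(sims)                       # formula (2.2)

# Toy hierarchy: Root -> C8 -> C9 -> {Chemistry, ChemEng}
parent = {"C8": "Root", "C9": "C8", "Chemistry": "C9", "ChemEng": "C9"}
print(similarity(parent, "Chemistry", "ChemEng"))  # 2*3/(4+4) = 0.75
```

The printed value reproduces the worked example of table 2.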
The second element, the Transliterated Word Weight, is based on the number of transliterated foreign words in the candidate term. This section describes these two elements.

3.1 Statistical Weight: Frequency-Based Weight

In the Statistical Weight, not only abbreviation pairs and translation pairs in parenthetical expressions but also the frequencies of terms are considered. Abbreviation pairs and translation pairs are detected using the following simple heuristics. For a given parenthetical expression A(B):

1. Check whether A and B are an abbreviation pair. The capital letters of A are compared with those of B. If half of the capital letters are matched sequentially, A and B are determined to be an abbreviation pair (Hisamitsu et al., 1998). For example, 'ISO' and 'International Standardization Organization' are detected as an abbreviation pair in the parenthetical expression 'ISO (International Standardization Organization)'.

2. Check whether A and B are a translation pair, using the bilingual dictionary.

After detecting abbreviation pairs and translation pairs, the Statistical Weight (WStat) of a term is calculated by formula (3.1), where α is a candidate term, |α| is the length of the term α, S(α) is the set of abbreviation and translation pairs of α, T(α) is the set of candidate terms that nest α, f(α) is the frequency of α, and C(T(α)) is the number of elements in T(α). In formula (3.1), the nested relation is defined as follows: let A and B be candidate terms; if A contains B, we say that A nests B. The formula implies that abbreviation pairs and translation pairs related to α are counted as well as α itself, and that the productivity of words in nested expressions containing α gives more weight when the generated expression contains α.
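The capital-letter heuristic for abbreviation pairs might be sketched as follows; the helper name and the greedy sequential-matching details are illustrative, with only the half-match threshold taken from the description above.

```python
# A minimal sketch of the abbreviation-pair heuristic for a parenthetical
# expression A(B): the capitals of the short form are matched, in order,
# against the initials of the long form. Helper names are assumptions.
def is_abbreviation_pair(short, long):
    caps = [c for c in short if c.isupper()]
    initials = [w[0].upper() for w in long.split() if w]
    # Greedily match the abbreviation's capitals against word initials,
    # keeping their relative order (a sequential match).
    i, matched = 0, 0
    for c in caps:
        while i < len(initials) and initials[i] != c:
            i += 1
        if i < len(initials):
            matched += 1
            i += 1
    return bool(caps) and matched >= len(caps) / 2

print(is_abbreviation_pair("ISO", "International Standardization Organization"))  # True
```

The ISO example above is exactly the case the heuristic is designed to accept.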
Moreover, formula (3.1) deals with single-word terms, since an abbreviation such as GUI (Graphical User Interface) is a single-word term and an English multi-word term is usually translated into a Korean single-word term (e.g. distributed database => bunsan deitabeisu).

W_Dic(α) = Score(α) x (1/W) x SUM_{i=1..W} 1/(dof_i + 1)    (2.3)

W_Stat(α) = |α| x ( SUM_{γ in {α} U S(α)} f(γ) + (1/C(T(α))) x SUM_{β in T(α)} f(β) )  if α is nested
W_Stat(α) = |α| x SUM_{γ in {α} U S(α)} f(γ)  otherwise    (3.1)

3.2 Transliterated Word Weight: Automatic Extraction of Transliterated Words

Technical terms and concepts are created around the world and must be translated or transliterated. Transliterated terms are one of the important clues for identifying the terms of a given domain. We examined dictionaries of the computer science and chemistry domains to investigate transliterated foreign words: about 53% of the entries in the computer science dictionary and about 48% of the entries in the chemistry dictionary are transliterated foreign words. Because there are many possible transliterated forms and they are usually unregistered terms, it is difficult to detect them automatically. In our method, we use an HMM (Hidden Markov Model) for this task (Oh et al., 1999). The main idea for extracting foreign words is that the composition of foreign words differs from that of pure Korean words, since the phonetic system of Korean differs from those of foreign languages. In particular, several English consonants that occur frequently in English words, such as 'p', 't', 'c', and 'f', are transliterated into the Korean consonants 'p', 't', 'k', and 'p' respectively. Since these Korean consonants are not used frequently in pure Korean words, this property can be used as an important clue for extracting foreign words from Korean text.
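The syllable-level detection can be illustrated with a toy Viterbi decoder that labels each syllable as pure Korean ('K') or foreign ('F'); the probability tables are invented for the example, and this first-order model is a simplification of the trained HMM used in the paper.

```python
# A toy Viterbi tagger over romanized syllables, labeling each as 'K'
# (pure Korean) or 'F' (foreign/transliterated). Transition and emission
# tables are invented toy values, not the paper's trained model.
def viterbi(syllables, trans, emit, tags=("K", "F")):
    # v[t] = (probability of best path ending in tag t, that path)
    v = {t: (emit[t].get(syllables[0], 0.01), [t]) for t in tags}
    for s in syllables[1:]:
        v = {
            t: max(
                ((p * trans[prev][t] * emit[t].get(s, 0.01), path + [t])
                 for prev, (p, path) in v.items()),
                key=lambda x: x[0],
            )
            for t in tags
        }
    return max(v.values(), key=lambda x: x[0])[1]

trans = {"K": {"K": 0.8, "F": 0.2}, "F": {"K": 0.2, "F": 0.8}}
# 'tem' contains the consonant 't', which is rare in pure Korean words,
# so its foreign emission probability is set high in this toy table.
emit = {"K": {"eun": 0.3, "si": 0.1}, "F": {"si": 0.2, "seu": 0.2, "tem": 0.3}}
print(viterbi(["si", "seu", "tem", "eun"], trans, emit))
```

On this toy input the decoder recovers the tagging si/F + seu/F + tem/F + eun/K discussed in the text.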
For example, in the word 'si-seu-tem' (system), the syllable 'tem' has a high probability of being a syllable of a transliterated foreign word, since the consonant 't' in 'tem' is usually not used in pure Korean words. Therefore, consonant information acquired from a corpus can be used to determine whether a syllable in a given term is likely to be part of a foreign word. Using the HMM, each syllable is tagged with 'K' or 'F'. A syllable tagged with 'K' is part of a pure Korean word; a syllable tagged with 'F' is part of a transliterated word. For example, 'si-seu-tem-eun (system is)' is tagged as 'si/F + seu/F + tem/F + eun/K'. We use consonant information to detect transliterated words, in the same way that lexical information is used in part-of-speech tagging. Formula (3.2) is used for extracting transliterated words, and formula (3.3) for calculating the Transliterated Word Weight (WTrl). Formula (3.3) implies that terms contain more transliterated foreign words than common words do. In formula (3.2), s_i is the i-th consonant in the given word and t_i is the i-th tag ('F' or 'K') of the syllable in the given word. In formula (3.3), |α| is the number of words in the term α and trans(α) is the number of transliterated words in the term α.

4 Term Weighting

The three individual weights described above are combined according to formula (4.1), called the Term Weight (WTerm), to identify the relevant terms, where φ is a candidate term and f, g, and h are normalization functions. In formula (4.1), the three individual weights are normalized by the functions f, g, and h respectively and weighted by the parameters α, β, and γ. The parameters are determined by experiment under the condition α + β + γ = 1; the values used in this paper are α = 0.6, β = 0.1, and γ = 0.3.
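Assuming min-max scaling for the unspecified normalization functions f, g, and h, the combination in formula (4.1) can be sketched as follows; the raw scores are invented, while the parameter values are the ones reported above.

```python
# A minimal sketch of combining the three weights into the Term Weight
# (formula 4.1). Min-max scaling stands in for the normalization functions
# f, g, h, which the paper does not specify; alpha=0.6, beta=0.1, gamma=0.3
# are the reported parameter values.
def minmax(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def term_weights(w_dic, w_stat, w_trl, alpha=0.6, beta=0.1, gamma=0.3):
    f, g, h = minmax(w_dic), minmax(w_trl), minmax(w_stat)
    return [alpha * d + beta * t + gamma * s for d, t, s in zip(f, g, h)]

# Three candidate terms with invented raw (W_Dic, W_Stat, W_Trl) scores.
w = term_weights(w_dic=[0.7, 0.2, 0.5], w_stat=[12.0, 3.0, 6.0], w_trl=[1.0, 0.0, 0.5])
ranked = sorted(range(3), key=lambda i: -w[i])
```

Candidate terms would then be ranked by `w` and truncated by a threshold, as described in section 1.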
W_Trl(α) = trans(α) / |α|    (3.3)

P(T|S) = P(T) P(S|T) / P(S), where P(T) P(S|T) = p(t_1) p(t_2|t_1) PROD_{i=3..n} p(t_i|t_{i-2}, t_{i-1}) x PROD_{i=1..n} p(s_i|t_i)    (3.2)

W_term(φ) = α x f(W_Dic(φ)) + β x g(W_Trl(φ)) + γ x h(W_Stat(φ))    (4.1)

5 Experiment

The proposed method was tested on a corpus of the computer science domain, the KT test collection. The collection contains 4,434 documents and 67,253 words, consisting of paper abstracts (Park et al., 1996). It was tagged with a part-of-speech tagger for evaluation. We examined the performance of the Dictionary Weight (WDic) alone to show its usefulness, and we compared the performance of the C-value, which is based on the statistical method (Frantzi et al., 1999), with that of the proposed method.

5.1 Evaluation Criteria

Two domain experts manually assessed the list of terms extracted by the proposed method. A result is accepted as a valid term only when both experts agree on it. This prevents the evaluation from being carried out subjectively, as could happen if a single expert assessed the results. The results are evaluated by the precision rate: the proportion of correct answers among the results extracted by the system.

5.2 Evaluation by the Dictionary Weight (WDic)

In this section, the evaluation is performed using only WDic, to show the usefulness of the dictionary hierarchy in recognizing relevant terms. The Dictionary Weight is based on the premise that information about the target domain is a good indicator for identifying terms: terms in the dictionaries of the target domain and of domains related to it act as positive indicators, while terms in the dictionaries of domains not related to the target domain act as negative indicators. The dictionary hierarchy is constructed to estimate the similarity between one domain and another.
 | Top 10% | Bottom 10%
Valid terms | 94% | 54.8%
Non-terms | 6% | 45.2%

Table 4. Terms and non-terms by Dictionary Weight

The result, shown in table 4, can be interpreted as follows: among the top 10% of the extracted terms, 94% are valid terms and 6% are non-terms; among the bottom 10%, 54.8% are valid terms and 45.2% are non-terms. That is, relevant terms greatly outnumber non-terms in the top 10% of the results, while non-terms are much more frequent in the bottom 10%. The results are summarized as follows:

- The higher a term's Dictionary Weight (WDic), the more likely the term is to be valid.
- More valid terms receive a high Dictionary Weight (WDic) than non-terms do.

5.3 Overall Performance

Table 5 and figure 3 show the performance of the proposed method and of the C-value method. The ranked lists are divided into 10 equal sections and compared; each section contains 1,291 terms and is evaluated independently.

Section | C-value: # of terms | C-value: precision | Proposed: # of terms | Proposed: precision
1 | 1181 | 91.48% | 1241 | 96.13%
2 | 1159 | 89.78% | 1237 | 95.82%
3 | 1207 | 93.49% | 1213 | 93.96%
4 | 1192 | 92.33% | 1174 | 90.94%
5 | 1206 | 93.42% | 1154 | 89.39%
6 | 981 | 75.99% | 1114 | 86.29%
7 | 934 | 72.35% | 1044 | 80.87%
8 | 895 | 69.33% | 896 | 69.40%
9 | 896 | 69.40% | 780 | 60.42%
10 | 578 | 44.77% | 379 | 29.36%

Table 5. Precision rates of the C-value and the proposed method: each section contains 1,291 terms and is evaluated independently. For example, in section 1 there are 1,291 candidate terms, of which 1,241 are relevant terms under the proposed method, so the precision rate in section 1 is 96.13%.

The results can be interpreted as follows. In the top sections, the proposed method shows a higher precision rate than the C-value. The distribution of valid terms is also better for the proposed method, since there is a downward tendency from section 1 to section 10.
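The sectioned evaluation can be sketched as follows; the ranked list and gold judgments are toy data rather than the KT-collection assessments.

```python
# A small sketch of the sectioned evaluation: split a ranked candidate list
# into 10 equal sections and compute precision (valid / extracted) in each.
def section_precision(ranked_terms, gold, n_sections=10):
    size = len(ranked_terms) // n_sections
    precisions = []
    for k in range(n_sections):
        section = ranked_terms[k * size:(k + 1) * size]
        valid = sum(1 for t in section if t in gold)
        precisions.append(valid / size)
    return precisions

ranked = [f"t{i}" for i in range(20)]          # 20 candidates, 10 sections of 2
gold = {f"t{i}" for i in range(8)} | {"t10"}   # toy expert judgments
precisions = section_precision(ranked, gold)
```

A "downward tendency" from section 1 to section 10, as reported in table 5, means the ranking pushes valid terms toward the top.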
This implies that terms with higher weights scored by our method have a higher probability of being valid terms. Moreover, the precision rate of our method decreases rapidly from section 6 to section 10, indicating that most of the valid terms are located in the top sections.

Figure 3. The performance of the C-value and the proposed method in each section

The results can be summarized as follows:

- The proposed method extracts valid terms more accurately than the C-value does.
- Most of the valid terms are in the top sections extracted by the proposed method.

Conclusion

In this paper, we have described a method for term extraction using a dictionary hierarchy. The hierarchy is constructed by a clustering method and is used for estimating the relationships between domains. Evaluation shows an improvement over the C-value. In particular, our approach can distinguish the valid terms efficiently: there are more valid terms in the top sections and fewer valid terms in the bottom sections. Although the method targets Korean, it can be applied to English with a slight change to the Transliterated Word Weight (WTrl). However, there is much scope for further extensions of this research. The problems of non-nominal terms (Klavans and Kan, 1998), term variation (Jacquemin et al., 1997), and relevant contexts (Maynard and Ananiadou, 1998) can be considered for improving the performance. Moreover, it is necessary to apply our method to practical NLP systems, such as information retrieval systems and morphological analysers.

Acknowledgements

KORTERM is sponsored by the Ministry of Culture and Tourism under the program of the King Sejong Project. Many fundamental aspects of this research are supported by the fund of the Ministry of Science and Technology under the STEP2000 project plan. This work was also partially supported by the KOSEF through the "Multilingual Information Retrieval" project at the AITrc.

References

Anderberg, M.R.
(1973) Cluster Analysis for Applications. New York: Academic Press.
Bourigault, D. (1992) Surface grammatical analysis for the extraction of terminological noun phrases. In Proceedings of the 14th International Conference on Computational Linguistics, COLING'92, pp. 977-981.
Dagan, I. and K. Church (1994) Termight: Identifying and translating technical terminology. In Proceedings of the 4th Conference on Applied Natural Language Processing, Stuttgart, Germany. Association for Computational Linguistics.
ETRI (1997) ETRI-KEMONG test set.
Felber, Helmut (1984) Terminology Manual. International Information Centre for Terminology (Infoterm).
Frantzi, K.T. and S. Ananiadou (1999) The C-value/NC-value domain-independent method for multi-word term extraction. Journal of Natural Language Processing, 6(3), pp. 145-180.
Hisamitsu, Toru and Yoshiki Niwa (1998) Extraction of useful terms from parenthetical expressions by using simple rules and statistical measures. In First Workshop on Computational Terminology, Computerm'98, pp. 36-42.
Jacquemin, C., Judith L.K. and Evelyne, T. (1997) Expansion of multi-word terms for indexing and retrieval using morphology and syntax. In 35th Annual Meeting of the Association for Computational Linguistics, pp. 24-30.
Justeson, J.S. and S.M. Katz (1995) Technical terminology: some linguistic properties and an algorithm for identification in text. Natural Language Engineering, 1(1), pp. 9-27.
Klavans, J. and Kan, M.Y. (1998) Role of verbs in document analysis. In Proceedings of the 17th International Conference on Computational Linguistics, COLING'98, pp. 680-686.
Lauriston, A. (1996) Automatic Term Recognition: Performance of Linguistic and Statistical Techniques. Ph.D. thesis, University of Manchester Institute of Science and Technology.
Lorr, M. (1983) Cluster Analysis and Its Application. Advances in Information System Science, 8, pp. 169-192.
Murtagh, F. (1983) A survey of recent advances in hierarchical clustering algorithms. Computer Journal, 26, pp. 354-359.
Maynard, D. and Ananiadou, S.
(1998) Acquiring context information for term disambiguation. In First Workshop on Computational Terminology, Computerm'98, pp. 86-90.
Oh, J.H. and K.S. Choi (1999) Automatic extraction of transliterated foreign words using a hidden Markov model. In Proceedings of the 11th Hangul and Korean Information Processing Conference, pp. 137-141 (in Korean).
Park, Y.C., K.S. Choi, J.K. Kim and Y.H. Kim (1996) Development of the KT test collection for researchers in information retrieval. In the 23rd KISS Spring Conference (in Korean).
Mapping WordNets Using Structural Information

J. Daudé, L. Padró, G. Rigau
Departament de Llenguatges i Sistemes Informàtics, Universitat Politècnica de Catalunya, Barcelona

Abstract

We present a robust approach for linking already existing lexical/semantic hierarchies. We use a constraint satisfaction algorithm (relaxation labeling) to select, among a set of candidates, the node in a target taxonomy that best matches each node in a source taxonomy. In particular, we use it to map the nominal part of WordNet 1.5 onto WordNet 1.6, with a very high precision and a very low remaining ambiguity.

1 Introduction

There is an increasing need for general, accurate and broad-coverage multilingual lexical/semantic resources for developing NLP applications. Thus, a very active field inside NLP during the last years has been the fast development of generic language resources.

Several attempts have been performed to connect already existing ontologies. In (Ageno et al., 1994), a Spanish/English bilingual dictionary is used to (semi)automatically link Spanish and English taxonomies extracted from DGILE (Alvar, 1987) and LDOCE (Procter, 1987). Similarly, a simple automatic approach for linking Spanish taxonomies extracted from DGILE to WordNet (Miller et al., 1991) synsets is proposed in (Rigau et al., 1995). The work reported in (Knight and Luk, 1994) focuses on the construction of Sensus, a large knowledge base for supporting the Pangloss machine translation system. In (Okumura and Hovy, 1994), (semi)automatic methods for associating a Japanese lexicon to an English ontology using a bilingual dictionary are described. Several experiments aligning the EDR and WordNet ontologies are described in (Utiyama and Hasida, 1997). Several lexical resources and techniques are combined in (Atserias et al., 1997) to map Spanish words from a bilingual dictionary to WordNet, and in (Farreres et al., 1998) the use of the taxonomic structure derived from a monolingual MRD is proposed as an aid to this mapping process. The use of a relaxation labeling algorithm to attach substantial fragments of the Spanish taxonomy derived from DGILE (Rigau et al., 1998) to the English WordNet, using a bilingual dictionary for connecting both hierarchies, has been reported in (Daudé et al., 1999).

In this paper we use the same technique to map WN1.5 to WN1.6. The aim of the experiment is twofold: first, to show that the method is general enough to link any pair of ontologies; second, to evaluate our taxonomy linking procedure by comparing our results with other existing WN1.5-to-WN1.6 mappings. This paper is organized as follows: in section 2 we describe the technique used (the relaxation labeling algorithm) and its application to hierarchy mapping. In section 3 we describe the constraints used in the relaxation process, and finally, after presenting some experiments and results, we offer some conclusions and outline further lines of research.

2 Relaxation Labeling

Relaxation labeling (RL) is a generic name for a family of iterative algorithms which perform function optimization, based on local information (see (Torras, 1989) for a summary). Its most remarkable feature is that it can deal with any kind of constraints; thus the model can be improved by adding any constraints available, and the algorithm is independent of the complexity of the model. That is, we can use more sophisticated constraints without changing the algorithm. The algorithm has been applied to POS tagging (Màrquez and Padró, 1997), shallow parsing (Voutilainen and Padró, 1997) and word sense disambiguation (Padró, 1998). Although other function optimization algorithms could have been used (e.g. genetic algorithms, simulated annealing, etc.), we found RL to be suitable to our purposes, given its ability to use models based on context constraints, and the existence of previous work on applying it to NLP tasks. A detailed explanation of the algorithm can be found in (Torras, 1989), while its application to NLP tasks, with advantages and drawbacks, is addressed in (Padró, 1998).

2.1 Algorithm Description

The relaxation labeling algorithm deals with a set of variables (which may represent words, synsets, etc.), each of which may take one among several different labels (POS tags, senses, MUS entries, etc.). There is also a set of constraints which state the compatibility or incompatibility of combinations of variable-label pairs. The aim of the algorithm is to find a weight assignment for each possible label of each variable, such that (a) the weights for the labels of the same variable add up to one, and (b) the weight assignment satisfies, to the maximum possible extent, the set of constraints. Summarizing, the algorithm performs constraint satisfaction to solve a consistent labeling problem. The steps followed are:

1. Start with a random weight assignment.
2. Compute the support value for each label of each variable. Support is computed according to the constraint set and to the current weights for labels belonging to context variables.
3. Increase the weights of the labels more compatible with the context (larger support) and decrease those of the less compatible labels (smaller support). Weights are changed proportionally to the support received from the context.
4. If a stopping/convergence criterion is satisfied, stop; otherwise go to step 2. We use the criterion of stopping when there are no more changes, although more sophisticated heuristic procedures may also be used to stop relaxation processes (Eklundh and Rosenfeld, 1978; Richards et al., 1981).

The cost of the algorithm is proportional to the product of the number of variables by the number of constraints.

2.2 Application to Taxonomy Mapping

As described in previous sections, the problem we are dealing with is to map two taxonomies. In this particular case, we are interested in mapping WN1.5 to WN1.6, that is, assigning each synset of the former to at least one synset of the latter. The modeling of the problem is the following:

- Each WN1.5 synset is a variable for the relaxation algorithm. We will refer to it as the source synset and to WN1.5 as the source taxonomy.
- The possible labels for that variable are all the WN1.6 synsets which contain a word belonging to the source synset. We will refer to them as target synsets and to WN1.6 as the target taxonomy.
- The algorithm will need constraints stating whether a WN1.6 synset is a suitable label for a given WN1.5 synset. As described in the next section, the constraints rely on the structure of both taxonomies.

3 The Constraints

Constraints are used by the relaxation labeling algorithm to increase or decrease the weights for variable labels. In our case, constraints increase the weights of the connections between a source synset and a target synset; increasing the weight of a connection implies decreasing the weights of all the other possible connections for the same source synset. To increase the weight of a connection, constraints look for already connected nodes that hold the same relationships in both taxonomies. Although there are other relationships between WordNet synsets which could be used to build constraints, we have focused on the hyper/hyponymy relationships. Figure 1 shows an example of possible connections between two taxonomies. Connection C1 will have its weight increased due to C3, C4 and C5, while connections C2 and C6 will have their weights decreased.

Figure 1: Sample of connections between taxonomies.

We distinguish different kinds of constraints, depending on whether we consider hypernyms, hyponyms or both, and on whether the relationships are immediate or hold at any level. Each constraint may be used alone or combined with the others. Below we describe all the kinds of constraints used. They are labeled with a three-character code (xyz), which reads as follows: the first character (x) indicates how the hyper/hyponymy relationship is considered in the source taxonomy, only an immediate node (I) or any ancestor/descendant (A); the second character (y) indicates the same for the target taxonomy; the third character indicates whether the constraint requires the existence of a connected hypernym (E), hyponym (O), or both (B).

IIE Constraint: The simplest constraint checks whether the connected nodes have respective direct hypernyms which are also connected. IIE stands for immediate source (I), immediate target (I), hypernym (E). This constraint increases the weight of the connections in which the immediate hypernym of the source node is connected to the immediate hypernym of the target node.

IIO Constraint: This constraint increases the weight of the connections in which an immediate hyponym of the source node is connected to an immediate hyponym of the target node.

IIB Constraint: This constraint increases the weight of the connections in which the immediate hypernym of the source node is connected to the immediate hypernym of the target node, and an immediate hyponym of the source node is connected to an immediate hyponym of the target node.

II Constraints: If we use constraints IIE, IIO and IIB at the same time, weights are modified for connections matching any of the constraints; that is, we are additively combining constraints, and when more than one of them applies, their effects are added.

Figure 2: II constraints.

AIE Constraint: This constraint increases the weight of the connections in which an ancestor of the source node is connected to the immediate hypernym of the target node.

AIO Constraint: This constraint increases the weight of the connections in which a descendant of the source node is connected to an immediate hyponym of the target node.

AIB Constraint: This constraint increases the weight of the connections in which an ancestor of the source node is connected to the immediate hypernym of the target node, and a descendant of the source node is connected to an immediate hyponym of the target node.

AI Constraints: If we use constraints AIE, AIO and AIB simultaneously, we apply either a hypernym constraint, a hyponym constraint, or both. In the last case, the joint constraint is also applied, which means that connections with matching hypernyms and hyponyms will have their weights doubly increased.

Figure 3: AI constraints. The + sign indicates that the hypernymy relationship represented by the arrow does not need to be immediate; in this case, this iteration is only allowed in the source taxonomy.

IA Constraints: These are symmetrical to the AI constraints; in this case, recursion is allowed only on the target taxonomy.
ãäâ<ê0æñê0æ&ðæåé8äéá è å èòäâ â¾ó  ã&è å%ðé8ê/äá2åFéð6ë IAE + IAO + IAB + + ìsá?í%îê0æELUõsöMA3÷0øùúeû‹üVýþŒùûeú0ÿ )  "!#"%$N2å%ã&â^îæ\é-ç%æƒä%è<Fæ„ã&è O%á£ô åäéá è å%ð5î%éäâ?â èà<á2å%íYê0æ&ãîê>ðá è ånè åPèFé8ç ð/á7æ&ð6ë ìsá í îê>æRQcð-ç%èàvðyäí ê/äFñç%á ãäâ<ê0æñê0æ&ðæåé8äéá è å èòäâ â  ã&è%å%ðé8êäá2åéðë S T UVNW&XY[Z\W^]_%`9a6]bdcW"`eBf,_%` 2åBé8ç%æñgæêòeè êKæéæ&ðéð;à9æ9î%ðæ3ðáî%â é8äFå%æ&è î%ð/â7 äâ?âvã&è å%ðé8êäá^åéðà<á é8ç‡é8ç%æ@ð8ä%æcê>æ&ãîê0ðá è åMñäé0ô é/æêåië ç%á ðN á æ&â ð<é8ç%æBñäãgFð6õóeóh5  óh5 ó  äå  ë + + AAE + + AAO AAB + + + + iBj k l&mon?prqtss@uovKwx,yz[{K|}wy,xo~ =n€l‚ ƒK€O„%monmon…r† moƒKn‡† ˆ‚ ‰‹ŠŒ†mNƒnO‚„ƒKnm+Ž6€Kjˆn j ƒNj €ƒ‘n@’O†%€KƒNjˆŠ,† mK’On‡‹† ˆ€Kƒm“„jˆ%ƒ=€Knƒ” •/n/„‚ €K†;† ’0…&„%mon‡H† l&m9’„%…&…jˆk;–Nj ƒ—ƒn ˜^™š›™œž …&mŸ†¡ j ‡&n‡£¢‰¥¤mojˆnƒK†ˆ^¦ާ„ˆ‡¥ƒn †%jˆj ‡&nˆn¨–N„€9©&lj7ƒ“nªj k 6Ž«€…&nj„‚ ‚ ‰¥jˆ—ƒn „€Kn€‹jˆ¥–j  ˜^™š›+™œ/+ž &„€„(j k ¬† ˆ­&® ‡&nˆn(€“† mon%”I¯Enƒ‘„j ‚7€„%ˆ°¢rn;Š,† l&ˆ‡djˆ±€Kn‘® ƒKj † ˆ9²"”´³%” µ[ˆ¶† mo‡&nm)ƒK†§…rnmhŠ,† mK’¬ƒn=† ’…&„%moj €K†ˆ6Ž–·n=&„‡ ƒK†‹† ˆ% nmoƒ ˜^™š›™œž ޏ–j ¹j €„:€Knˆ€Kn:’„%…® …jˆk(ºƒ&„ƒEj €Ž)j7ƒO’„%…€ n„¹¼» ¡½o¾¡š^¿ jˆÀÂÁB³%”Mp ƒK†;„P ¡„%mŸjĈ%ƒ:jˆHÀÂÁij%”ÆÅ%ÇŽ=jˆƒK†(„ ›oÈ"š›™¿ ’„%…® …jˆk"Ž%–3j7Oj €–3&„ƒt†l&mB„‚ k%† moj ƒ&’±‡&†n€”BÉjˆn €K‰&ˆ€KnƒK€:„%mon:† „mo€KnmOƒ&„ˆ;€Knˆ€Kn€ Ž=ƒnR† ˆ %nmh® €Kj † ˆj €?€“ƒmK„j k ƒoŠ,† mo–N„%mo‡”0•ʍnˆƒo–=†R€Knˆ€Kn€Ojˆ ƒn €„%’On³%”Mp@€“‰ ˆ€Knƒ3–=nmon@„€K€“j7kˆn‡Rƒo–=†?€“nˆ€Kn€ jˆ‹‡&jÆËnmonˆƒO³%”ÆÅO€K‰&ˆ€KnƒK€ Ž"–=n@ƒ“† † Ì:¢r†%ƒ-ƒ„%mok%nƒ“€ „€¸ ¡„‚ j ‡Ž€K‚ j k %ƒ“‚7‰Ojˆmon„€“jÈkEƒ‘nÍmŸn’„jˆjˆk?„%’ ® ¢j k lj ƒo‰:†Š ˜^™š›+™œ/+ž ” ÎNnÂmŸn€l‚ ƒK€@„%mon † ’…&lƒKn‡/† %nm«ƒnO€K‰&ˆ€KnƒK€ –Nj ƒ„σB‚7n„Ï€Kƒ† ˆn=„%ˆ‡&j ‡"„ƒ“n† ˆ&ˆnƒ“j7†ˆ6Ž –3j7 mon…&mon€“nˆ%ƒOÐ%Ð&”³Ñ҆Š=ÀÂÁÓ&Ô´Õ"”O•/nO† ˆ€Kj ‡&nm ¡Ö× Ø ¾MÙ*ÚÛ Ú › €K‰&ˆ€KnƒK€Pƒ†%€Kn;–Nj ƒÜ’O† mon;ƒ‘&„%ˆ±† ˆn „%ˆ‡&j ‡"„ƒKn † ˆ&ˆnƒKj † ˆ6” θ„%¢‚ n¨³R…&mon€“nˆ%ƒK€0ƒnR„%’O† l&ˆƒO†Š@ˆ† ‡&n€OŠ,† m –j O‡&j €„%’O¢j k 
l&„ƒKj † ˆOj €=…&nmhŠ,† mK’On‡Ž%„ˆ‡€K† ’On „%ˆ‡&j ‡"„ƒKnE†ˆ&ˆnƒKj † ˆ€N‡&j €K„%mo‡&n‡¨ºŒjŒ”Ýn%”Bƒn‰‡&† ˆ†%ƒÌÏnn…:„€…&†€K€Kj¢‚ n  Þ7Þ ƒn@„%ˆ‡&j ‡"„ƒ“n€Ç” „’?¢j k l†l€ † %nmK„‚ ‚ ˜^™š›™œž Ð%ßr”Æà%Ñ Ð%Ðr”Æá%Ñ â6ã Ð%Ðr”Æß%Ñ Ð%Ðr”ÆÐ%Ñ Î¸„%¢‚ n¨³%qä=† %nmK„kn†ŠEÀÂÁÓ&Ô´ÕŠŒ†m¢&†%ƒ‘(’„%…® …jˆk%€” θ„%¢‚ nÍá3…&mon€KnˆƒK€B„%ˆEn€“ƒKj’„ƒKj † ˆå†Šæ†–¨’„%ˆ‰ çoèéoéBêë ìæéhí3î {‘ï é {y ð y,y î ñ}òò“ììì ~ uovï‘x,uh|~ î z,|}w¡u é y,vKw~ éoóô%òõoì w ò †ÏŠtƒ†%€KnE„€K€“j7kˆ&’Onˆ%ƒ«–·nmŸn§moj k ƒŽ"„Ï€·–=n‚ ‚t„€=ƒn …&mŸnj7€“j7†ˆ‹ŠŒ†m ˜^™š›™œž Ž6†’…&lƒKn‡¨l&ˆ‡&nm3ƒn €‘„%’OnP† ˆ‡&j ƒKj † ˆ€”öÎN†€KnP­"k l&mon€/–=nmonP† ’ ® …&lƒ“n‡Â¢%‰O’„%ˆl&„‚ ‚ ‰?‚ jˆ&ÌjÈk0ƒK†À9ÁÓ&Ô´÷@„E€„%’0…‚7n †ÏŠø³Ð%ààE€K‰&ˆ€KnƒK€=mK„%ˆ‡&† ’O‚ ‰O†%€Knˆ¶Š#mo†’dÀÂÁB³”Æp&Ž „ˆ‡:ƒnˆ:l€Kn@ƒ‘j7€«€„%’…‚ n?’0„%…&…jˆkù„€N„mŸn‘ŠŒnmh® nˆnOƒK†:n „‚l&„ƒKn„ς7‚’0„%…&…jˆk%€”OÎNn€Kn ­"k l&mon€ €‘†¡–—ƒ&„ƒB† l&m€K‰€KƒKn’Ü…&nmhŠ,† mK’¶€·„O¢&nƒKƒ“nm’„%…® …jˆk9ƒ&„%ˆ ˜^™š›™œž ”ÎNnE‡&jMË*nmŸnˆn¶¢rnƒo–·nnˆ ¢r†%ƒ/’0„%…&…jˆk%€Oj7€ €Kj k ˆjM­"„%ˆƒÂ„ƒ@„RÐp%ÑG† ˆ­&® ‡&nˆn ‚ n %n‚,” „%’O¢j k l† l€ † %nmK„‚ ‚ ˜^™š›™œž Ð%ú&”ÆúÑ û Ð%Å&”MÐ%Ñ Ð%Å&”ÆÐÑ û Ð%ß&”MÅ%Ñ â6ã º#üEý¥à&þÆú%Ç Ð%Å&”ÆpÑ û Ð%ÿ&”Mÿ%Ñ Ð%ß&”}²Ñ û Ð%ß&”MÐ%Ñ â6ã º#üEý¥à&þ}² Ç Ð%ÿ&”ÆàÑ û Ð%ÿ&”MÅ%Ñ Ð%ß&”ÆÅÑ û Ð%ß&”MÐ%Ñ â6ã º#üEý¥à&þÆp%Ç Ð%ÿ&”ÆáÑ û Ð%ÿ&”MÅ%Ñ Ð%ß&”ÆÿÑ û Ð%ß&”MÐ%Ñ Î¸„%¢‚ n á&qI¤monj €Kj † ˆû mon„‚ ‚;mon€l‚ ƒK€—Š,† mH¢&†%ƒ À9ÁB³%”Æp ûÀÂÁB³%”ÆÅE’0„%…&…jˆk%€” Éjˆn@mon‚„  „ƒKj † ˆ0‚„%¢&n‚ jˆk…rnmhŠŒ†mK’O€·„å–=nj7k%ƒ „Ï€K€Kj k ˆ&’OnˆƒOŠ,† mOn„;…&†%€“€Kj¢‚7n† ˆ&ˆnƒ“j7†ˆ6Ž=–·n „ˆ † ˆƒmo†%‚:ƒn±mon’„jˆjˆkF„%’O¢j7klj7ƒo‰ º,„%ˆ‡ ƒ‘ l€ ƒnÂmon„‚ ‚%…&monj €Kj † ˆ¼ƒmK„‡&n†ÏËÍÇE¢‰ &„%ˆk® jˆk;ƒnƒ&mon€†%‚ ‡ º#ü¡Ç0ƒ&„ƒ:ƒ‘n –=nj k ƒ9ŠŒ†m„ †ˆ&ˆnƒKj † ˆ&„€BƒK†@mŸn„¹ ƒK†@¢rnN† ˆ€Kj ‡&nmon‡:„3€K†® ‚lƒKj † ˆ6”3‚ ƒ† lk j7knm@ƒ‘&mon€†%‚ ‡&€?’0„jˆ%ƒ„ÏjÈ mŸn„‚ ‚„%ˆ‡¨…&mo†‡"lnù„:j k nmO…&monj €Kj † ˆ6ŽÄ‡&jÆËnmh® nˆn€E„%monEˆ†%ƒN€“ƒ„ƒKj €KƒKj „‚ ‚ ‰:€Kj k ˆjM­"„%ˆƒ”     ! 
#"%$'&&(*) i^† m—n„ † ˆ­"‡&nˆn k mo† l&… jˆGƒ‘n ¤mojˆn‘® ƒ“† ˆ;’„%…&…jˆk"Ž=ƒn ›Û,+o¿OÙ ½¹™™Ö9™š"¿ †%‚l&’ˆ;jˆ ƒ‘„%¢‚ nÊú±jˆ‡&j „ƒKn€Hƒn±…&nmonˆƒ„k%nH†ŠÀÂÁB³%”Mp €“‰ ˆ€Knƒ“€jˆ9–j ¹9†l&m€“‰ €Kƒ“n’ …&mŸ† …&†%€“n€Í„σ‚ n„€Kƒ †ˆn† ˆ&ˆnƒKj † ˆP„‚ €K† …&mŸ† …&†%€“n‡¨¢%‰ƒn9¤mojˆn‘® ƒ“† ˆ@’„…&…jÈk^”øÎ«n.½0/@Ù&½‘™™Ö9™š^¿ †%‚l&’ˆ jˆ® ‡&j „ƒ“n€ƒn „’O† l&ˆƒ?†ÏЧ† ˆ&ˆnƒ“j7†ˆ€ù…&mo†…&†%€Kn‡ ¢‰±† l&m€K‰€KƒKn’ „Ï‚7€“†°…&mo† …r†%€Kn‡ ¢‰£¤mojˆnƒK† ˆ ’0„%…&…jˆk"” ۍnʄÏk monn’Onˆƒ¨¢&nƒo–=nnˆ ¢&†%ƒ‘ €“‰ €Kƒ“n’O€¨j € ©&lj ƒKn;j7k6Ž@€…rnj„‚ ‚7‰±Š,† m-ƒn¨kmo† l&…€R–«j7ƒ‘°„ j k —† ˆ­"‡&nˆn‚ n %n‚,”dÎNj €:j €:© lj ƒKnmon„€K† ˆ® „¢‚7n%Ž €KjˆnN„N…&nm ŠŒnƒ€“‰ €Kƒ“n’H–=† l‚ ‡?¢&nn  …rnƒKn‡ ƒ“† „k mŸnn–Nj ƒƒn:„€K€Kj k ˆ&’¶nˆ%ƒK€Ojˆ;á%à%Ñ † ˆ­&® ‡&nˆnOk mŸ† l&…‹†Š ˜^™š›™œž † ˆ‚ ‰ „%¢r† lƒ@á%à%Ñ҆Š ƒ‘nƒKj’On€ ”µ,ƒ„Ï‚7€“†¶’Ol€KƒN¢rnEƒ„%Ìnˆ0jˆ%ƒK†„†l&ˆ%ƒ ƒ‘&„ƒEŠ,† mE‚ †– †ˆ­"‡&nˆnRk mo†l&…€Ž ˜^™š›™œž j € ’Ol:’O† mon@„’?¢j k l†l€” 132547686,9:6,;< =8> ;@?'AB6,; = 6 C3DFEBGIH C3DFEBGKJ C3DFEBGML 2N4 >5OBP QR!S0T U8V,W7X QRYSZT U[V,W\X QRYS0T U8V]W\X 9 > ; >^ 6,9 >5ON^ _`baM_c _@dBa H c _dBa E c _dba H c _dBae]c _dBaIfc e EE c ggbaM`c _ E a J c g_Bahe,c _ E ae,c g_Ba L c g_BaIgc _ E c gdbaM_c g@_BaMgc ggBa J c g_ba H c ggBaMd@c g_Bahe,c g E c `_ba H c d E aMfc d E ahe,c d E a L c d E a J c d E a J c d E c d`ba L c d@gBa E c d`Ba J c ddbaM`c d`Ba L c d`BaIgc ` E c LH aMgc L@H aMgc LH aIgc LH aMgc LH aMg@c LH aIgc L@E c `gba J c gbeaMfc dfBaIdc ddba J c dfBaMd@c ddBa J c JNE c LE aMdc L@E aMgc LE aIgc LE aMgc LE aMg@c LE aIgc H@E c ` L a H c ` L a H c ` L a H c ` L a H c ` L a H c ` L a H c f E c H fbaM`c H fBaM`c H fBaI`c H fbaM`c H fBaM`@c H fBaI`c ^iOBj < > <0k!l gdba H c g@_Bae,c gdBaIgc ggbaMgc ggBa H c ggBaI`c m > <ik!l _ H aM`c _ J a L c _ H aIgc _ J a J c _ J ae]c _ J aIfc m k j ln6oHBpq132547686,9:6,;< j 68<7rs686,; jb> <itu9vk PBPNw ;N2 ^,a m tN6 k]x6,4yk!26 476,9vk w ; w ;N2 k9 jNw 2 ONw <\z w ; { 4 w ; = 68< > ;|9vk PBPNw 
;N2|k;NA w ;|<0tN6}9vk PBPNw ;N2 P 6,4~ € > 4968A j z<itN6‚4768lk!ƒ„k< wn> ;l…k j 68l w ;N2:kl†2 > 4 w <itB9 wn^ ^ t > r‡;v4ˆ6 ^iP 6 = < w x68lnz w ; =8> l O 9v; ^‚‰Ši‹NU8ŠiŒR8ŽRYv‘ ’]“I”•]“ Xh– k;NA˜—™ RY ’]“I”•]“ Xh–˜> €š<ik j l†6J a › O 4 ^ z ^ <6,9 P 4 >NPB>^ 6 ^]œ|w ;9 >@^ < = k ^ 6 ^,œ k O ; wnž„O 6uŸ}  eaI`¡^ zB; ^ 68<€ > 4 6,k = tŸ}  ea L ^ zB; ^ 68< a m tN6}k]x6,4yk!26o4yk;N26 ^ €¢4 > 9 ea E@E e < >|ea EE d£P 4 > ~ PB>@^ k!l ^oP 6,4 ^ zB; ^ 68<¤AB6 P 6,;NA w ;N2 > ;<0tN6 = t >^ 6,;C <itB476 ^ t > lnA œ r‡t w l†6u<itN6 { 4 w ; = 68< > ;¥9vk PBPNw ;N2¦tBk ^ k;¡k]x6,4k26 > € ea EE dBa § O 9v9vk@4 wn¨8w ;N2 œ <itN6 >5j <0k w ;N68A©4ˆ6 ^iO ln< ^¥PB>w ;@< <itBk!< >5O 4 ^ z ^ <6,9 w†^ k j ln6 < >:P 4 > A ON= 6k ln6 ^^ k9~ jNw 2 ON>5ON^ k ^^w 25;B96,;@<¡<itBk; ‰Ši‹NU8ŠiŒR8Žšœ r w <itFk ^w 25; w ? = k;@<lnz¥t w 25tN6,4}k =8=,O 4k = z¦k;NAªr w AB6,4 =8> x~ 6,4k!2@6 a « ;¬k!ABA w < wn> ; œ>5O 4 ^ z ^ <6,9 > ;Nlnz ON^ 6 ^q^ <i4 ON= < O 4kl w ;@€ > 49vk!< wn> ;®­;Bk@968lnz œ t@z P 6,4¯t@z Pb> ;zB9z¥4768lk]~ < wn> ; ^ t wPN^i° r‡t w ln6 ‰Ši‹NU8ŠiŒR8Ž˜ON^ 6 ^±^ z„; ^ 68<²r > 47A ^]œ 2l >^^ 6 ^,œ k;NA > <itN6,4 w ;@€ > 49¬k!< wn> ; w ;´³ > 47A'µ¶68< a ›¤;®<0tN6 > ;N6·tBk;NA œ r‡tN6,; w ;@€ > 49vk!< wn> ; > <itN6,4 <itBk;¸<ik!ƒ > ; > 9z ^ <04 ON= < O 476 w†^±ON^ 68Ao476 ^iO ln< ^ 9 w 2Nt< j 668x6,; j 68<<y6,4 a ›¤;u<itN6 > <itN6,43tBk;NA œ € > 4 = k ^ 6 ^ w ;¹r‡t wn= t ^iON= t w ;@€ > 49vk!< w†> ; wn^ ; > <¦k,x!k w lk j ln6 ­ 6 a 2 a € O 47<0tN6,4 AB68x@68l >NP 96,;< > €º O 4 > ³ > 47A'µ¶68< ^ w ;¥;N68r»lk;N2 O k!26 ^0°8œ²^ <i4 ON= < O 476˜9vk]z P 4 > x w AB6k 4768l w k j ln6 j k ^wn^,a ¼ ½}¾¿ÁÀN¢ñÄ]Å¢¾¿±ÄÆ ÇqÃÉÈÊ˱ÌbȘÍξÈNÏ ³6ÐtBk]x6k PBP l w 68AÑ<itN6|4768lk!ƒBk!< wn> ;Òlk j 68l w ;N2ÑklM~ 2 > 4 w <itB9Ó< > k ^^w 25;Òk;¥k PBP 4 >5P 4 w k!<6; > AB6 w ;¥k <0k47268<<0k!ƒ > ; > 9z¤< > 6,k = t; > AB6 w ;vk ^y>5O 4 = 6²<ik!ƒ@~ > ; > 9z œ5ON^w ;N2 > ;NlnzÔt@z P 6,4¯t@z Pb> ;@z„9z w ;@€ > 49vk]~ < w†> ; a Õ 6 ^iO ln< ^¤> ;£Ÿ} ±ÖB×hØ£< > Ÿ} 
ÁÖB×hÙ}9vk PBPNw ;N2ÔtBk]x6 j 686,;¥476 Pb> 47<68A a m tN6˜t w 25t P 476 =8wn^wn> ;´k = t w 68x68A P 4 > x w AB6 ^ € O 47<itN6,4¶68x w AB6,; = 6<itBk<‚<it wn^ <6 = tB; w†žBO 6 Ú P 4768x wn>5ON^ l†z ON^ 68A w ;Û­ÝÜ k O AsÞ 6¡68<¡k!l aœ e]___° < > l w ;Bߥk§ P k; wn^ tÒ<ik!ƒ > ; > 9z|< > Ÿ£ ±ÖB×hØ Ú =8> ; ^ < w ~ < O <6 ^ k@;àk =8=,O 4k!<6´968<it > Aà< >Ñ=8> ;B;N6 = <Ò<ik!ƒ@~ > ; > 9 w 6 ^,œ 6 w <0tN6,4¶€ > 4.<itN6 ^ k96 > 4¶A wIá 6,476,;<3lk;@~ 2 O k!26 ^,aâ*O 47<0tN6,4¤68ƒN<6,; ^wn> ; ^ > €.<0t w†^ <6 = tB; w†žBO 6 < >˜w ; = l O AB6 w ;@€ > 49vk!< wn> ; > <itN6,4<itBk; ^ <i4 ON= < O 4kl 9¬k,zã476 ^iO ln< w ;¥kuxYkl O k j l†6< >N> lÁ€ > 4<it >^ 6 =8> ;@~ = 6,4y;N68AÑr w <it%<0tN6ªAB68x68l >5P 9:6,;<ªk@;NA w 9 P 4 > x60~ 9:6,;< > €qlk4726¤ln68ƒ w†= kl > 4 ^ 6,9vk;@< wn= > ;@< > l > 2 w 6 ^]a m tN6476 ^0O l†< ^s>Nj <ik w ;N68A OBP < > ; > r ^ 686,9¹< > w ;@~ A wn= k!<y6¤<itBk!<,p ä m tN6·4768lk!ƒBk!< wn> ;ål…k j 68l w ;N2àk!ln2 > 4 w <0tB9 wn^ k 2 >5> A}<y6 = tB; wnžBO 6‡< > l w ;B߬<\r > A wIá 6,476,;@<st w 6,4~ k@4 = t w 6 ^,aâ> 46,k = t; > AB6Ár w <it ^ 68x@6,4k!l Pb>^^w ~ j ln6 =8> ;B;N6 = < wn> ; ^,œ <0tN6 = k;NA w A'k!<6¬<itBk!< j 6 ^ < 9¬k!< = tN6 ^ <itN6 ^iO 44 >NO ;NA w ;N2 ^ <i4 ON= < O 476 wn^¤^ 60~ ln6 = <68A a ä m tN6 ^ <i4 ON= < O 4k!l w ;@€ > 49vk!< wn> ; P 4 > x w AB6 ^ 6,; >NO 25tvßN; > r¶l†68AB2@6‚< > k =8=,O 4k!<68lnzl w ;Bß<ik!ƒ@~ > ; > 9 w 6 ^,a ºÉƒ P 6,4 w 96,;< ^o> ;|9vk PBPNw ;N2<ik!ƒ@~ æ²çNè@é'êBë,èNì8ë íîiïNð8îiñò8ó ôõ}ö÷øNùnú5ûNùnü7ý ú5þ7çNûBÿ Nù8ë ö÷øNùnú5ûNùnü7ý        ÷:ç5èNçë,÷çNû                                       !  !     !   !           "   #  !   $                            0ûBøNüçüiö!% !         & çü0ö!%         & öø%në' )(+* ,@ë,þö!úë þ7ë,÷¬ö!ùèNù…èNú¡ö÷øNùnú5ûNùnü7ý}ç!-±øBç@ü/.¡÷vöÿBÿNùèNú çNèNç5÷ùnë0vöûNüç5÷vöüùnì,ö!%1%†ýë02Nüiþö!ì8üyë8ê3þˆç5÷ö BÿBö@èNù4.653ô87 üyç:9<;+ >=@? 
öûNêBA ëãë8üªö!%C"D  E /.Nç!FÎü4.Bö!ü¡ü/.Nëüë8ì0.BèNùGBûNëª÷vö]ý´øBë ûyë4û%ë0,@ë,èF .Në,èøBçü/.3ü0ö!25çNèNç5÷ùnë0 øbë0%nç5èNú üyçvêBùHë,þˆë,èü%öèNú5ûBöúë0‡çNþI.Bö,ë'ü0þûNì8üiûBþ7ë0 %në0JKù÷ù1%öþü/.Böèuùè|ü/.Nëì,ö!yë}þ7ë,ÿBçNþ7üë8ê|ùè ü4.Nù¤ÿBöÿbë,þ0 L & .NëMýüë,÷ ÿBþ7çNê'ûNì8ë0àö»úçNçNê ö#Jùnú5èN ÷:ë,èü3-ç5þ39<;Î÷vö@ÿBÿNù…èNú)D}øBö!ë8ê¹ç5è%ný ç5è .@ýBÿBë,þJO.@ýBÿBç5è@ý„÷ý|þ7ë0%ö!üùnç5è4.Nù…ÿD+F .NùnìP.ù1 0ÿBë8ì8ùö!%1%ný·ûë4-¢û%QF .Në,è¦èNççü/.Në,þùè-ç5þ÷vöN üyù†çNè¡ù1ö,!ö!ù1%…ö@ø%†ëR= ùCKëÉù…èuü/.Në¤ì,ö#ë ç!-±÷vöÿN ÿNùèNúü/.NëTS±ûBþ7çUWVX.Nùnë,þöþˆìP.Nùnë0 E  & .Në}þ7ë4N ÷¬ö!ùèNù…èNú£ö÷øNù†úNûNù†ü7ýù1Q%†ç!FYF‚ùnü/.£ö'.Nùnú.£ö!ì4N ì,ûBþyö!ì8ýD5öèNê¡ÿBþ7ë8ì8ù1ùnç5èZ„þˆë8ì,ö!%1%*ü0þö!êBë8ç!H˜÷vö]ý øbëvì8ç5è@üiþ7ç%1%në8ê¦øý|ö!ê![ûüùèNú˜ü4.Në\vü4.Bþ7ë0/.N ç%†ê] Nç5÷ë²ù1JiûNë0üç¤ö!êBê'þ7ë0J^-ç5þù÷vÿBþ7ç!,Nù…èNú¤ü4.Nësö!%N úç5þˆù†ü4.B÷ ÿBë,þ_-ç5þ÷vö@èNì8ëDöèNêüç}ë02Bÿ%nçùnü ùnüJ ÿbç`N ùøNù1%nù†üyù†ë0öþ7ë( Lba ë çü/.Në,þ þ7ë0%ö!üùnç5è/.Nùÿ ü4.Böè .ýN ÿbë,þJO.@ý„ÿbç5è@ý„÷ý ö!ãì8ç5èüiþyö!ùèüJãüçcë0%në8ì8ü ü4.Në·øBë0yü|ì8ç5èBèNë8ì8üùnç5è8 d²ë0%ö!üyù†çNè/.Nùÿ·ö! yù…ø%nùèNúeD3ì8ç5ûùè8D¶ë8üì¥ì8çNû%†êFøbëÐûyë8ê]f¢è öêBêBù†üyù†çNè8Dg9\; ÿBþ7ç!,5ùnêBë0%çü/.Në,þ%þ7ë0%ö!üùnç5èN 4.Nù…ÿhiûNì0. ö!iýBèNç5è@ý„÷ýD ÷ë,þ7çNèýB÷ýD ë8üyì F .NùnìP.»ì8çNû%†êö#%yçàÿBþ7ç!,Nù†êBë®ûë4-¢û% ì8çNèüiþö!ùè@üJ Lba ëçü/.Në,þ¤ö,!ö!ù1%…ö@ø%†ë¬ù…è-ç5þ÷¬ö!üùnç5è8D]0ûNìP.ö! 
yý„èë8üKF²ç5þ7êDú%nçJë0Dë8üìùèü/.Nëj9\;ãüyç 9<;|÷vöÿBÿNùèNú£üiö!/k) Lml ù…èk:ü/.Nën,ë,þyøBö!%CD5ö!ê![7ë8ì8üù1,Yö!%CDNöèNêvö!ê,ë,þøNùö!% ÿBö@þ7üJ‚ç!-+9\;+ öèNêT9\;+  L & ë0ü²ü/.Në¤ÿBë,þ_-ç5þ÷¬öèNì8ë‡ç#-ü4.Në‡üë8ì0.BèNù1G„ûNë¤üyç %nùèkçü/.Në,þyüiþûNì8üiûBþ7ë0T= ëKúT9\;porq7 ôtsu9\;po õ7]vxw8q8Dê'ûNüì0.Ny9\;+DBùnüiö!%nùöèNy9\;zDx0 E  Lma ë|ùnü¡üçg%nùèkÒüiö!2Nç5èNç5÷ùnë0<-ç5þ¡èNë0Fi%öèN úNûBö!úë0‚üç<S±ûBþ7çUç5þ7êeV¶ë8ü Lm{ ù1,ëÔö|üë,ÿ|øbë8ýç5èNê|ü4.Në\çNûBþ7ì8ë4N¢üç!N¢üiö@þ7úë8ü ,Nù1ùnç5è8D|öèNêÎ÷¬öÿ ü/.NëFü0ö!25çNèNç5÷ùnë0¥ùè ö yý„÷v÷:ë8üiþ7ùnì,ö!%ªÿ.Nù1%nçç5ÿ.@ýD¦ü/.BöüFù1Dãë,ö!ìP. èNçNêBë´ç!-¡ë,ö!ìP.åüiö!2Nç5èNç5÷ý%ù¦ö!Jyù†úNèNë8êàüyç ö®èNçNêBëFùè¹ü/.Në´çü/.Në,þãüiö#25ç5èNçN÷ ý & .Nù1 4.Nç5û%nêFùèNì,þ7ë,ö!ë|ü4.Nëì8ç!,ë,þyö!úëD‚öèNêFþ7ë8ùèN -ç5þ7ì8ëOêBùyì,öþ7ê ì8çNèBèNë8ì8üùnç5è¦ü/.Bö!üª÷¬ö,ý%øBë F²ë,ök}F .Në,è ö!Jùnú5èNë8êFçNè%†ý´ùè ç5èNë|êBùþ7ë8ì4N üyù†çNè8 & .Nù1¤ì8ç5û%nê|ë0,ë,èuç5ÿbë,èü/.Në êBçNç5þ`¤üyç ÷¬öèýN¢üç!N\÷¬öèý£üiö!2Nç5èNç5÷ýv÷vöÿBÿNùèNúe ~ 3€+‚nƒ „T…y†e‡zˆx‰Y†e‚^Š#‹ & .Nùþ7ë0ë,öþˆìP..Bö#˜øBë8ë,è%ÿBöþ7üùö!%1%nýY-¢ûBèNêBë8êÛø@ý ü4.Nëvü/.Në a S®æ²ç5÷v÷:ùŒùnç5è=CVu* Žmfæ>fy& N/  N    E D|øý»ü/.Në6BÿBö@èNù4.d.ë0ë,öþ7ì0.i?¤ë,ÿBöþˆü`N ÷:ë,èüR= & fæ  Ny  N\æI N‘ E ö@èNê¥øýãü/.Në˜æ‚ö!ü0öN %öè’d²ë0ë,öþ7ì0.K?¤ë,ÿBöþ7üi÷:ë,èüD!ü4.Bþ7ç5ûNú.ü/.Në‡æudIS l ÿBþˆç[7ë8ì8ü±öèNê}ü4.Në “¤ûBö!%nùnü\ýTd²ë0ë,ö@þ7ìP. 
{ þ7ç5ûBÿ<”±þˆç!N úNþö÷v÷ëj= { dB“  N‘  E  •3–—/–˜–e™nš–)› œIjœŸž J¡¢£R¤`j¥§¦¨_©y Jª1ª0« ¢¡£3¬]j­+®1¯¦¨J£R°±j­+®1ž¦²£ ³ ]­+¢!´µ4« ¶ ž² J·¸£¹¦¡´TœI]º!¦»K®1¢©y¢²½¼4¾¾¸¿'Àn°IÁt À+ª1®1¡ͰQ J¡ Œµy¦©_®1¢¡:Áp¡Ä!®1µy¢¡»' Œ¡©Œi¤C¡WÅzÆyÇ4ȑÉ`É`Ê0Ë Ì"ÍÎ¸Ï ÇyÐTÑ1ÒÉÔÓ¸ÕÑ1Ò×Ö Í ÑØÉ4Æ ÍÙ Ñ Ì Ç ÍÙ0ÚKÛ Ç Í ÐJÉ4ÆyÉ Í È`ÉRÇ Í Û ÇÜzÝßÞ!Ñ Ù Ñ Ì Ç ÍÙ0Ú]à]Ì"ÍÎ Þ Ì1Ï Ñ Ì È Ï½áŒÛ+â]à ÖCãäå æ/çPè0£¹éQêë ¢©_¢£ì¦í¦¡ î ±œnª1Ä0¦µŒ£ï Œ´®1©y¢µŒð¼/¾ñò!ôó Ì È‘È Ì Ç ÍÙ Æ Ì Çä+É Í É/Æ Ù0Ú Ö Ú Þ Ï ÑõÆ Ù ÊÇ<ÊÉ ÚÙKà É ÍÎ Þ ÙjötÏ Ý Ù]÷ Í Ç ÚÙ×ø8â¹ù 'ú§®¯ª¢¸ë žµ_¦0ûxºœI£ú§¦µ_üJ Jª1¢¡¦£ßº#í¦®1¡ 읟œ+©y¨y Œµy®1¦¨Œ£ ºB¥§ª1®1»' J¡!©J£+ýuB¬¦µyµ_ Jµy J¨Œ£ °±n­+®1ž¦²£ ¦¡´ ³ z­z¢!´µ/« ¶ ž² J·0>¼/¾¾ò#þ¥§¢»¯®1¡®1¡ž î ²ª1©y®ë íª1  î  J©yÿ¢´¨ ûØ¢µ ©_ÿ Tœn²©_¢»K¦©y®1üj¥§¢¡¨y©yµ_²üJ©_®¢¡ ¢0û î ²ª1©y®1ª1®¡ž²¦ª  ¢µ_´n Œ©y¨JX¤C¡×Ý߯_Ç/È`É`É`Ê Ì"ÍÎ0Ï ÇyÐ Ö Í ÑØÉ4Æ ÍÙ Ñ Ì Ç ÍÙ0ÚKÛ Ç Í ÐJÉ4ÆyÉ Í È`É<Ç Í É`È`É Í ÑnÊ  ÙÍ È`É Ï Ì"Í ã Ù ÑØÞ!Æ Ù0Ú]àeÙÍÎ Þ Ù4Î É Å+ÆyÇ4È`É Ï`ÏJÌ"ÍÎ á  Ÿã à Åå æ @è0£ À+·J®1ž¢0Ä ¥§ÿ¦µ_ãúx²ª1ž¦µy®1¦ ì Ÿ¦²´+«  0£p)¦´µ« ¢£8¦¡´R°ï¹­+®1ž¦²\¼4¾¾¾# î ¦í!ë í®1¡ž î ²ª1©y®1ª1®¡ž²¦ª ³ ®1 Jµy¦µyü`ÿ®1 J¨n¨_®1¡ž ­+ Œª¦ !¦©y®1¢¡ ¦¯ Œª®1¡že¤C¡Ç Ì Í ÑÖ/ä]ó Û Ç Í ÐŒÉ/Æ_É Í È`É+Ç ÍIö ÜIË Ý Ì Æ Ì È Ù0Ú É/Ñ1ÒÇ4Ê ÏnÌ Í ã Ù ÑõÞ!Æ Ù¸Ú¸à)ÙÍÎ Þ Ù/Î ÉnÅ+Æ_Ç/È`É Ï`ÏrÌ ÍÎ ÙÍ Ê ø É4Æ à)Ù Æ Î É Û ÇÆõÝ#ÇÆ Ù3áØö ã à Å! 
øàzÛ å æærè "£ î ¦µyê!ª1¦¡´£Qº ì!#I8Á8Ã!ª²¡´ÿ<¦¡´Tœu ­z¢¨_ J¡!ûØ Jª1´j¼4¾òñ#'¥§¢¡!Ä Jµ ë ž Œ¡üJ $8µ_¢í Jµy©_® Œ¨¹¢¸ûß­+ Œª¦ !¦©y®1¢¡%¦¯ Œªª1®1¡žßÀ8 Jü`ÿ!ë ¡®1üJ¦ª'­+ Jí¢µ_© ò &¼0£'¥§¢»'í²©_ Jµ\º!üŒ® Œ¡üJ b¥§ J¡!©_ JµJ n¡®Ä Jµy¨_®1©Cê'¢0û î ¦µyê!ª1¦¡´  ýI]¬¦µyµ_ Jµy Œ¨J£t°ï¹­+®1ž¦²£e¦¡´ ³ ]­+¢!´µ4« ¶ ž² J·0R¼4¾¾ñ! n¨_®1¡ž  ¢µy´'n J© ûØ¢µ)úx²®1ª1´®1¡ž  ¢µy´Ÿ J©y¨Œ¹¤C¡ ÅzÆyÇ0Ë È`É`É‘Ê Ì"ÍÎ¸Ï Ç_Ð Û+â]à Ö@ãä§Ë  Ûeà)( ÇÆ+* Ï ÒÇrÝ Ç Í-,)Ï Ë Ù/Î ÉnÇ_Ð ( ÇÆ_Ê/ãIÉ4Ñ Ì"Í ã Ù ÑõÞ!Æ Ù0Úà)ÙÍÎ Þ Ù/Î ÉxÅ+Æ_Ç/È`É Ï`ÏrÌ ÍÎ  Ï ÑõÉ/Ü Ï £ î ¢¡©_µ«  J¦ªõ£¥§¦¡¦´¦ 鱝¹éQ¡®1žÿ©Q¦¡´Rº²ÃT¼/¾¾¸¿'ú§²®1ª1´®1¡ž ¦.¦µyž `ë º!üJ¦ª1 +éB¡¢0/+ª Œ´ž §úx¦¨y §ûØ¢µ î ¦ü`ÿ®1¡ §À8µy¦¡¨yª1¦©_®¢¡ ¤C¡½ÅzÆyÇ4ȑÉ`É`Ê Ì ÍÎ0Ï ÇyÐQÑ1ÒÉ$QܱÉ/Æ Ì È ÙÍ  Ï`Ï Ç/È Ì"Ù Ñ Ì Ç Í ÐJÇÆ QÆJÑ Ì 1 È Ì"Ù0Ú Ö Í ÑõÉ Ú1Ì Î É Í È`É á 2$ Öå æ4ç4è0 8 î43 ¦µ65!² J·I¦¡´788)¦´µ« ¢B¼/¾¾ò#]œ¬eª1 9!®1¯ª1 #Qº À8¦žž Jµ2n¨y®1¡ž¦¡jœn²©y¢»K¦©_®üŒ¦ª1ª1ê œŸü95!²®1µy J´4¦¡!ë ž²¦ž  î ¢!´ Jªõe¤C¡ Å+Æ_Ç/È`É`É`Ê Ì"ÍÎ0Ï Ç_ÐxÑ1ÒÉ;:!ÕÑ1Ò ÍÍ Þ Ù¸Ú  É`É/Ñ Ì"ÍÎ ÇyÐKÑÒÉ< Ï`Ï Ç/È Ì"Ù Ñ Ì Ç Í ÐJÇÆ Û ÇÜzÝßÞ!Ñ Ù Ñ Ì Ç ÍÙ¸Ú à]Ì ÍÎ Þ Ì1Ï Ñ Ì È Ï "=Ç Ì"Í Ñ Ûeà ö  Ûeà £í¦ž J¨?>@ñBAC>¸¿ED!£ î ¦´µy®1´£º!í¦®1¡£!ì²ªê °±0œI î ®1ªª1 JµŒ£­Q0ú§ Jü`ÃF/+®©_ÿ£¥Ÿ¬ Œªª1¯¦²»j£0 u°Qµy¢¨y¨Œ£ ¦¡´½é± î ®1ª1ª1 JµJn¼4¾¾¼0¹¬)®1Ä )¦í Jµy¨Ÿ¢¡  ¢µy´'n J©Œ Ö Í ÑØÉ4Æ ÍÙ Ñ Ì Ç ÍÙ0Ú ÇÞ!Æ ÍÙ¸Ú ÇyÐ à ÉG Ì È‘Ç Î Æ Ù ÝÒF0 œI7#ŸÃ!²»²µy¦Y¦¡´>Á§ ³ ¢0Ä!ê ¼/¾¾¸¿ úx²®ª1´®1¡ž H`¦í¦¡ Œ¨y `ëC J¡žª®1¨_ÿÔ´®üŒ©y®1¢¡¦µyêT¯¦¨y J´ ¢¡R¢¡!©y¢ª1¢žê ûØ¢µb»K¦ü`ÿ®1¡ g©yµ_¦¡¨_ª1¦©y®1¢¡ ¤C¡ Ý߯yÇ4È`É`É‘Ê Ì"ÍÎ¸Ï ÇyÐ   ÅI ( ÇÆ+* Ï ÒÇ`Ý Ç Í?J Þ!Ü ÙÍIàeÙÍÎ Þ Ù4Î ÉKɑÈ`Ò Í Ç Ú Ë Ç Î 0£í¦ž J¨$>@ LMAC>¸¿¼¸ 8N)¦´µ« ¢ ¼/¾¾ñ#  J O4Æ Ì Ê ö+Í  Ì ÆyÇ Í Ü±É Í Ñ ÐJÇÆP Í Ñ Ù G Q0É4Ü ÙÍ Ñ Ì ÈR Ù/Î4ÎÌ ÍÎ S8ÿ´ À+ÿ Œ¨y®1¨J£ Ÿ Jí  ª Œ¡ž²¦©yž J¨ ®+º!®1¨y©_ J»' Œ¨I¤C¡!ûØ¢µy» 3 ¦©y®1üJ¨ŒTn¡®ë Ä Jµy¨_®1©y¦©)¢ª1®1© 3  ŒüJ¡®1üJ¦Y´ g¥§¦©y¦ª1²¡!ꦣj¬ J¯µy²¦µyê ÿ!©_©yíÂVUUW///Qª¨_®õ ²íü0 J¨UFX4í¦´µ_¢ )8µy¢!üJ©_ JµJ£± J´®1©y¢µJ>¼4¾ñò# à Ç ÍÎ Ü ÙÍ ó Ì È/Ñ Ì Ç ÍÙ Æ Ç_Ð Û ÇÜ±Ü Ç ÍïözÍÎ0Ú1Ì1Ï Ò!Y 
¢¡ž»'¦¡°Qµ_¢²í£ ³ ¦µyª1¢0/Q£ Áp¨y¨y Z£eÁ8¡žª1¦¡´  읹­+®1ü`ÿ¦µy´¨Œ£ u ¦¡´žµy J¯ 0£§¦¡´4)¹ºF/§¦®1¡T¼/¾ñ¼0 #Ÿ¡3©yÿ j¦üJüŒ²µy¦üJê3¢0ûBí®[! Jª+µ_ Jª1¦!¦©_®1¢¡Rª1¦¯ Jª1ª1®1¡ž Ö ö]ö]ö Æ ÙÍ!ÏJÙ È/Ñ Ì Ç Í#Ï Ç Í  Ï ÑõÉ/Ü Ï\Y\ÙÍRÙÍ Ê Û 0Ë O`É4Æ Í É4Ñ Ì È Ï £e¼¼ ]"¿F^`ÂV@& @MAC@ &¾# °ïP­+®1ž¦²£ ³ ­z¢´µ/« ¶ ž² Œ·0£¦¡´ ìPÀ8²µ_»'¢p¼4¾¾ D!eœŸ²!ë ©_¢»K¦©y®1üJ¦ª1ªê Z©_µy¦üJ©y®1¡ž À8µy¦¡¨yª1¦©_®1¢¡7®1¡Ã!¨n²¨y®1¡ž ¦_/+®1´ jüJ¢0Ä Jµy¦ž j¨_ J»'¦¡©_®1üj©y¦ ¢¡¢»êm¤C¡3ÝÆyÇ0Ë È`É`É`Ê Ì"ÍÎ0Ï Ó0ÕÑÒ Ö Í ÑõÉ4Æ ÍÙ Ñ Ì Ç ÍÙ0ÚIÛ Ç Í ÐJÉ4ÆyÉ Í È`É` Öå æÕ0£ î ¢¡!©yí Jª1ª1® ŒµJ£¬µy¦¡üJ 0 °ïz­z®1ž¦² £ ³ B­+¢!´µ4« ¶ ž² J·¸£B¦¡´ ÁttœŸž®1µyµ_ 0M¼/¾¾ñ! ú§²®1ª1´®1¡ž œnüJüŒ²µy¦©y jº! J»K¦¡!©y®1üÀ8¦!¢¡¢»'®1 J¨zûصy¢» î ­$ Ÿ¨JX¤C¡Å+ÆyÇ4È`É`É‘Ê Ì"ÍÎ¸Ï Ç_Ð Û+â]à ÖCãä§Ë  Ûeà å æ0a£ î ¢¡!©yµ«  J¦ªõ£¥§¦¡¦´¦ ¥Ÿ0À8¢µ_µy¦¨Jx¼4¾ñ¾#¹­+ Œª¦ !¦©y®1¢¡B¦¡´Ÿ J²µy¦ª  Œ¦µy¡®¡ž )¢®¡!©_¨¹¢¸ûߥ§¢¡!Ä Jµyž J¡üJ Ÿ¦¡´ Ÿ®1Ä Œµyž Œ¡üJ 0YÇÞ!Æ ÍÙ0Ú Ç_Ð Å Ù Æ Ù¸Ú"Ú É Ú]ÙÍ Êjó Ì1Ï ÑØÆ Ì O/Þ!ÑõÉ‘Ê Û ÇÜzÝßÞ!Ñ Ì ÍÎ £YL#ÂV>¼/òBA >¸¿¿ î Yn©_®1ꦻK¦\¦¡´3鱝 ³ ¦¨_®´¦¼/¾¾ò!gú§¢©_©y¢» ë@²í œŸª1®ž¡»' Œ¡©§¢0û#Ÿ¡©_¢ª1¢ž®1 J¨J§¤C¡\Å+ÆyÇ4È`É`É‘Ê Ì"ÍÎ¸Ï ÇyÐQÖbË Û  Ödc^ÇÆ+* Ï ÒÇ`ÝbÇ Í×âtÍ ÑØÇ Ú Ç ÎÌ É Ï'ÙÍ Ê  Þ Ú Ñ Ì1ÚÌ"ÍÎ Þ Ù0Ú ã à ŧ£Ÿ¦ž¢0ꦣì¦í¦¡ œue^¢²©_®ª1¦®¡ J¡j¦¡´f8!e¦´µ« ¢Ô¼4¾¾ò#g Ÿ JÄ Jª1¢í®1¡ž ¦ ³ ê!¯µ_®´h?)¦µy¨_ JµJQ¤C¡\Å+Æ_Ç/È`É`É`Ê Ì"ÍÎ0Ï Ç_ÐIÑ1ÒÉÕÑ1Ò Û Ç Í ÐJÉ4ÆyÉ Í È`É'Ç Í §ÝÝ Ú1Ì É`Ê'ã Ù ÑØÞ!Æ Ù¸Ú¹à)ÙÍÎ Þ Ù/Î É'Å+ÆyÇ¸Ë È`É Ï`ÏrÌ ÍÎ0\ Ÿã à ŧ£eí¦ž J¨Iñ&BA#ñò!£  ¦¨_ÿ®1¡ž©_¢¡. Q¥Ÿ œn¥p
Automatic Labeling of Semantic Roles

Daniel Gildea
University of California, Berkeley, and International Computer Science Institute
[email protected]

Daniel Jurafsky
Department of Linguistics
University of Colorado, Boulder
[email protected]

Abstract

We present a system for identifying the semantic relationships, or semantic roles, filled by constituents of a sentence within a semantic frame. Various lexical and syntactic features are derived from parse trees and used to derive statistical classifiers from hand-annotated training data.

1 Introduction

Identifying the semantic roles filled by constituents of a sentence can provide a level of shallow semantic analysis useful in solving a number of natural language processing tasks. Semantic roles represent the participants in an action or relationship captured by a semantic frame. For example, the frame for one sense of the verb "crash" includes the roles Agent, Vehicle and To-Location.

This shallow semantic level of interpretation can be used for many purposes. Current information extraction systems often use domain-specific frame-and-slot templates to extract facts about, for example, financial news or interesting political events. A shallow semantic level of representation is a more domain-independent, robust level of representation. Identifying these roles, for example, could allow a system to determine that in the sentence "The first one crashed" the subject is the vehicle, but in the sentence "The first one crashed it" the subject is the agent, which would help in information extraction in this domain. Another application is in word-sense disambiguation, where the roles associated with a word can be cues to its sense. For example, Lapata and Brew (1999) and others have shown that the different syntactic subcategorization frames of a verb like "serve" can be used to help disambiguate a particular instance of the word "serve".
Adding semantic role subcategorization information to this syntactic information could extend this idea to use richer semantic knowledge. Semantic roles could also act as an important intermediate representation in statistical machine translation or automatic text summarization and in the emerging field of Text Data Mining (TDM) (Hearst, 1999). Finally, incorporating semantic roles into probabilistic models of language should yield more accurate parsers and better language models for speech recognition.

This paper proposes an algorithm for automatic semantic analysis, assigning a semantic role to constituents in a sentence. Our approach to semantic analysis is to treat the problem of semantic role labeling like the similar problems of parsing, part of speech tagging, and word sense disambiguation. We apply statistical techniques that have been successful for these tasks, including probabilistic parsing and statistical classification. Our statistical algorithms are trained on a hand-labeled dataset: the FrameNet database (Baker et al., 1998). The FrameNet database defines a tagset of semantic roles called frame elements, and includes roughly 50,000 sentences from the British National Corpus which have been hand-labeled with these frame elements.

The next section describes the set of frame elements/semantic roles used by our system. In the rest of this paper we report on our current system, as well as a number of preliminary experiments on extensions to the system.

2 Semantic Roles

Historically, two types of semantic roles have been studied: abstract roles such as Agent and Patient, and roles specific to individual verbs such as Eater and Eaten for "eat". The FrameNet project proposes roles at an intermediate level, that of the semantic frame. Frames are defined as schematic representations of situations involving various participants, props, and other conceptual roles (Fillmore, 1976).
For example, the frame "conversation", shown in Figure 1, is invoked by the semantically related verbs "argue", "banter", "debate", "converse", and "gossip" as well as the nouns "argument", "dispute", "discussion" and "tiff". The roles defined for this frame, and shared by all its lexical entries, include Protagonist-1 and Protagonist-2 or simply Protagonists for the participants in the conversation, as well as Medium, and Topic. Example sentences are shown in Table 1.

Defining semantic roles at the frame level avoids some of the difficulties of attempting to find a small set of universal, abstract thematic roles, or case roles such as Agent, Patient, etc (as in, among many others, (Fillmore, 1968), (Jackendoff, 1972)). Abstract thematic roles can be thought of as being frame elements defined in abstract frames such as "action" and "motion" which are at the top of an inheritance hierarchy of semantic frames (Fillmore and Baker, 2000).

The preliminary version of the FrameNet corpus used for our experiments contained 67 frames from 12 general semantic domains chosen for annotation. Examples of domains (see Figure 1) include "motion", "cognition" and "communication". Within these frames, examples of a total of 1462 distinct lexical predicates, or target words, were annotated: 927 verbs, 339 nouns, and 175 adjectives. There are a total of 49,013 annotated sentences, and 99,232 annotated frame elements (which do not include the target words themselves).

3 Related Work

Assignment of semantic roles is an important part of language understanding, and has been attacked by many computational systems. Traditional parsing and understanding systems, including implementations of unification-based grammars such as HPSG (Pollard and Sag, 1994), rely on hand-developed grammars which must anticipate each way in which semantic roles may be realized syntactically.
Writing such grammars is time-consuming, and typically such systems have limited coverage.

Data-driven techniques have recently been applied to template-based semantic interpretation in limited domains by "shallow" systems that avoid complex feature structures, and often perform only shallow syntactic analysis. For example, in the context of the Air Traveler Information System (ATIS) for spoken dialogue, Miller et al. (1996) computed the probability that a constituent such as "Atlanta" filled a semantic slot such as Destination in a semantic frame for air travel. In a data-driven approach to information extraction, Riloff (1993) builds a dictionary of patterns for filling slots in a specific domain such as terrorist attacks, and Riloff and Schmelzenbach (1998) extend this technique to automatically derive entire case frames for words in the domain. These last systems make use of a limited amount of hand labor to accept or reject automatically generated hypotheses. They show promise for a more sophisticated approach to generalize beyond the relatively small number of frames considered in the tasks. More recently, a domain independent system has been trained on general function tags such as Manner and Temporal by Blaheta and Charniak (2000).

4 Methodology

We divide the task of labeling frame elements into two subtasks: that of identifying the boundaries of the frame elements in the sentences, and that of labeling each frame element, given its boundaries, with the correct role.
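The two-subtask decomposition can be sketched as a simple pipeline. The following is our own schematic illustration, not the authors' code: `identify_boundaries` and `label_role` are hypothetical stand-ins for the two statistical components described in the rest of this section.

```python
# A schematic sketch of the two-subtask decomposition described above.
# This is our own illustration, not the authors' code: identify_boundaries
# and label_role are hypothetical stand-ins for the two statistical
# components (boundary identification and role labeling).

def label_frame_elements(sentence, identify_boundaries, label_role):
    """Run subtask 1 (find frame element spans), then subtask 2 (assign a
    role to each span, given its boundaries)."""
    spans = identify_boundaries(sentence)
    return [(span, label_role(sentence, span)) for span in spans]

# Toy stand-ins: treat each capitalized word as a one-word frame element
# and label every element Agent.
def toy_boundaries(sentence):
    return [(i, i + 1) for i, w in enumerate(sentence.split()) if w.istitle()]

def toy_role(sentence, span):
    return "Agent"

print(label_frame_elements("Kim argued with Pat", toy_boundaries, toy_role))
# [((0, 1), 'Agent'), ((3, 4), 'Agent')]
```

The point of the decomposition is that the second component can be trained and evaluated independently, using human-annotated boundaries in place of the first component's output.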
We first give results for a system which labels roles using human-annotated boundaries, returning to the question of automatically identifying the boundaries in Section 5.2.

Figure 1: Sample domains and frames from the FrameNet lexicon (e.g. the Communication domain with frames Conversation, Questioning, and Statement, and the Cognition domain with frames Judgment and Categorization, each with its own frame elements).

Table 1: Examples of semantic roles, or frame elements, for target words "argue" and "argument" from the "conversation" frame

  Frame Element    Example with target verb             Example with target noun
  Protagonist-1    Kim argued with Pat                  Kim had an argument with Pat
  Protagonist-2    Kim argued with Pat                  Kim had an argument with Pat
  Protagonists     Kim and Pat argued                   Kim and Pat had an argument
  Topic            Kim and Pat argued about politics    Kim and Pat had an argument about politics
  Medium           Kim and Pat argued in French         Kim and Pat had an argument in French

4.1 Features Used in Assigning Semantic Roles

The system is a statistical one, based on training a classifier on a labeled training set, and testing on an unlabeled test set. The system is trained by first using the Collins parser (Collins, 1997) to parse the 36,995 training sentences, matching annotated frame elements to parse constituents, and extracting various features from the string of words and the parse tree. During testing, the parser is run on the test sentences and the same features extracted. Probabilities for each possible semantic role r are then computed from the features.
The probability computation will be described in the next section; the features include:

Phrase Type: This feature indicates the syntactic type of the phrase expressing the semantic roles: examples include noun phrase (NP), verb phrase (VP), and clause (S). Phrase types were derived automatically from parse trees generated by the parser, as shown in Figure 2. The parse constituent spanning each set of words annotated as a frame element was found, and the constituent's nonterminal label was taken as the phrase type. As an example of how this feature is useful, in communication frames, the Speaker is likely to appear as a noun phrase, Topic as a prepositional phrase or noun phrase, and Medium as a prepositional phrase, as in: "We talked about the proposal over the phone." When no parse constituent was found with boundaries matching those of a frame element during testing, the largest constituent beginning at the frame element's left boundary and lying entirely within the element was used to calculate the features.

Figure 2: A sample sentence with parser output (above) and FrameNet annotation (below): "He heard the sound of liquid slurping in a metal container as Farrell approached him from behind", with frame elements Theme, Goal, and Source. Parse constituents corresponding to frame elements are highlighted.

Grammatical Function: This feature attempts to indicate a constituent's syntactic relation to the rest of the sentence, for example as a subject or object of a verb. As with phrase type, this feature was read from parse trees returned by the parser. After experimentation with various versions of this feature, we restricted it to apply only to NPs, as it was found to have little effect on other phrase types.
Each NP's nearest S or VP ancestor was found in the parse tree; NPs with an S ancestor were given the grammatical function subject and those with a VP ancestor were labeled object. In general, agenthood is closely correlated with subjecthood. For example, in the sentence "He drove the car over the cliff", the first NP is more likely to fill the Agent role than the second or third.

Position: This feature simply indicates whether the constituent to be labeled occurs before or after the predicate defining the semantic frame. We expected this feature to be highly correlated with grammatical function, since subjects will generally appear before a verb, and objects after. Moreover, this feature may overcome the shortcomings of reading grammatical function from a constituent's ancestors in the parse tree, as well as errors in the parser output.

Voice: The distinction between active and passive verbs plays an important role in the connection between semantic role and grammatical function, since direct objects of active verbs correspond to subjects of passive verbs. From the parser output, verbs were classified as active or passive by building a set of passive-identifying patterns. Each of the patterns requires both a passive auxiliary (some form of "to be" or "to get") and a past participle.

Head Word: As previously noted, we expected lexical dependencies to be extremely important in labeling semantic roles, as indicated by their importance in related tasks such as parsing. Since the parser used assigns each constituent a head word as an integral part of the parsing model, we were able to read the head words of the constituents from the parser output. For example, in a communication frame, noun phrases headed by "Bill", "brother", or "he" are more likely to be the Speaker, while those headed by "proposal", "story", or "question" are more likely to be the Topic.
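The five features above can be made concrete with a small sketch. This is not the paper's code: the dict-based constituent representation and its field names are invented for illustration, and real values would come from the parser's trees.

```python
# Toy feature extraction for a single parse constituent. A constituent is
# a dict with 'label', 'ancestors' (nearest first), word 'span' (start,
# end-exclusive), and 'head'; these field names are our own assumptions.

def grammatical_function(constituent):
    """NPs get subject/object from the nearest S or VP ancestor;
    the feature is undefined for other phrase types."""
    if constituent['label'] != 'NP':
        return None
    for ancestor in constituent['ancestors']:
        if ancestor == 'S':
            return 'subject'
        if ancestor == 'VP':
            return 'object'
    return None

def extract_features(constituent, target_index, passive):
    return {
        'phrase_type': constituent['label'],
        'gf': grammatical_function(constituent),
        'position': 'before' if constituent['span'][1] <= target_index else 'after',
        'voice': 'passive' if passive else 'active',
        'head': constituent['head'],
    }

# "He drove the car over the cliff", target verb "drove" at index 1:
subj = {'label': 'NP', 'ancestors': ['S'], 'span': (0, 1), 'head': 'he'}
obj = {'label': 'NP', 'ancestors': ['VP', 'S'], 'span': (2, 4), 'head': 'car'}
print(extract_features(subj, 1, passive=False))
```

The subject NP comes out as (NP, subject, before, active, "he"), matching the intuition that agenthood correlates with subjecthood and pre-verbal position.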
For our experiments, we divided the FrameNet corpus as follows: one-tenth of the annotated sentences for each target word were reserved as a test set, and another one-tenth were set aside as a tuning set for developing our system. A few target words with fewer than ten examples were removed from the corpus. In our corpus, the average number of sentences per target word and the number of sentences per frame are both relatively small amounts of data on which to train frame element classifiers. Although we expect our features to interact in various ways, the data are too sparse to calculate probabilities directly on the full set of features. For this reason, we built our classifier by combining probabilities from distributions conditioned on a variety of combinations of features.

An important caveat in using the FrameNet database is that sentences are not chosen for annotation at random, and therefore are not necessarily statistically representative of the corpus as a whole. Rather, examples are chosen to illustrate typical usage patterns for each word. We intend to remedy this in future versions of this work by bootstrapping our statistics using unannotated text.

The probability distributions used in the final version of the system are listed in the table below. Coverage indicates the percentage of the test data for which the conditioning event had been seen in training data. Accuracy is the proportion of covered test data for which the correct role is predicted, and Performance, simply the product of coverage and accuracy, is the overall percentage of test data for which the correct role is predicted. Accuracy is somewhat similar to the familiar metric of precision in that it is calculated over cases for which a decision is made, and performance is similar to recall in that it is calculated over all true frame elements.
However, unlike a traditional precision/recall trade-off, these results have no threshold to adjust, and the task is a multi-way classification rather than a binary decision.

The distributions calculated were simply the empirical distributions from the training data. That is, occurrences of each role and each set of conditioning events were counted in a table, and probabilities calculated by dividing the counts for each role by the total number of observations for each conditioning event. For example, the distribution P(r | pt, t) was calculated as follows:

    P(r | pt, t) = #(r, pt, t) / #(pt, t)

Some sample probabilities calculated from the training data are shown in the table below.

Table: Distributions calculated for semantic role identification, with the coverage, accuracy, and performance of each: P(r | t) (100% coverage), P(r | pt, t), P(r | pt, gf, t), P(r | pt, position, voice), P(r | pt, position, voice, t), P(r | h), P(r | h, t), and P(r | h, pt, t). Here r indicates semantic role, pt phrase type, gf grammatical function, h head word, and t target word, or predicate.

Table: Sample probabilities for P(r | pt, gf, t) calculated from training data for the verb "abduct", with their counts in the training data: P(r = Agt | pt = NP, gf = Subj, t = abduct), P(r = Thm | pt = NP, gf = Subj, t = abduct), P(r = Thm | pt = NP, gf = Obj, t = abduct), P(r = Agt | pt = PP, t = abduct), P(r = Thm | pt = PP, t = abduct), P(r = CoThm | pt = PP, t = abduct), and P(r = Manr | pt = ADVP, t = abduct). The variable gf is only defined for noun phrases. The roles defined for the removing frame in the motion domain are: Agent, Theme, CoTheme ("... had been abducted with him") and Manner.

Results

Results for different methods of combining the probability distributions described in the previous section are shown in the table below. The linear interpolation method simply averages the probabilities given by each of the distributions:

    P(r | constituent) = λ1 P(r | t) + λ2 P(r | pt, t) + λ3 P(r | pt, gf, t)
                       + λ4 P(r | pt, position, voice) + λ5 P(r | pt, position, voice, t)
                       + λ6 P(r | h) + λ7 P(r | h, t) + λ8 P(r | h, pt, t)

where Σi λi = 1. The geometric mean, expressed in the log domain, is similar:

    P(r | constituent) = (1/Z) exp{ λ1 log P(r | t) + λ2 log P(r | pt, t)
                       + λ3 log P(r | pt, gf, t) + λ4 log P(r | pt, position, voice)
                       + λ5 log P(r | pt, position, voice, t) + λ6 log P(r | h)
                       + λ7 log P(r | h, t) + λ8 log P(r | h, pt, t) }

where Z is a normalizing constant ensuring that Σr P(r | constituent) = 1. The results shown reflect equal values of λ for each distribution defined for the relevant conditioning event (but excluding distributions for which the conditioning event was not seen in the training data). Other schemes for choosing values of λ, including giving more weight to distributions for which more training data was available, were found to have relatively little effect. We attribute this to the fact that the evaluation depends only on the ranking of the probabilities rather than their exact values.

Figure: Lattice organization of the distributions from the table above, with more specific distributions towards the top; P(r | h, pt, t) is the most specific and P(r | t) the least.

In the "backoff" combination method, a lattice was constructed over the distributions from more specific conditioning events to less specific, as shown in the figure. The less specific distributions were used only when no data was present for any more specific distribution. As before, probabilities were combined with both linear interpolation and a geometric mean.

Table: Results on the development set (percent correct) for the combining methods linear interpolation, geometric mean, backoff with linear interpolation, and backoff with geometric mean, against a baseline of always choosing the most common role.
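The counting and interpolation scheme just described can be sketched in a few lines: empirical conditional distributions are estimated by counting, and a prediction averages whichever distributions have seen the conditioning event. The feature names, toy data, and equal weights below are illustrative assumptions, not the actual training setup.

```python
# Toy training and equal-weight linear interpolation over several
# conditional distributions P(role | features).
from collections import Counter, defaultdict

def train(observations, feature_sets):
    """One count table per conditioning-feature set:
    P(role | cond) = #(role, cond) / #(cond)."""
    tables = {fs: defaultdict(Counter) for fs in feature_sets}
    for role, feats in observations:
        for fs in feature_sets:
            tables[fs][tuple(feats[f] for f in fs)][role] += 1
    return tables

def interpolate(tables, feats, roles):
    """Average the distributions whose conditioning event occurred in
    training (equal lambdas, cf. the interpolation formula above)."""
    scores, seen = {r: 0.0 for r in roles}, 0
    for fs, table in tables.items():
        cond = tuple(feats[f] for f in fs)
        if cond in table:
            total = sum(table[cond].values())
            for r in roles:
                scores[r] += table[cond][r] / total
            seen += 1
    return {r: s / seen for r, s in scores.items()} if seen else scores

obs = [('Agent', {'pt': 'NP', 't': 'abduct'}),
       ('Agent', {'pt': 'NP', 't': 'abduct'}),
       ('Theme', {'pt': 'NP', 't': 'abduct'})]
tables = train(obs, [('pt',), ('pt', 't')])
probs = interpolate(tables, {'pt': 'NP', 't': 'abduct'}, ['Agent', 'Theme'])
print(probs)   # Agent gets 2/3, Theme 1/3
```

The geometric-mean and backoff variants would combine the same count tables differently: multiplying probabilities in the log domain, or consulting only the most specific table that has data.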
The final system performed at an accuracy well above that achieved by always choosing the most probable role for each target word, which is essentially chance performance on this task. Results for this system on test data, held out during development of the system, are shown below.

Table: Results on the test set, using the backoff linear interpolation system; accuracy is given on the development and test sets against the most-common-role baseline.

Discussion

It is interesting to note that looking at a constituent's position relative to the target word along with active/passive information performed as well as reading grammatical function off the parse tree. A system using grammatical function, along with the head word, phrase type, and target word, but no passive information, and a similar system using position rather than grammatical function, scored nearly identically. However, using head word, phrase type, and target word without either position or grammatical function yielded a markedly lower score, indicating that while the two features accomplish a similar goal, it is important to include some measure of the constituent's syntactic relationship to the target word. Our final system incorporated both features, giving a further, though not significant, improvement. As a guideline for interpreting these results, on a test set of this size the threshold for statistical significance at p < .05 is an absolute difference of one to two percentage points in performance. Use of the active/passive feature made a further improvement: adding passive information to the system using position but no grammatical function raised its score. Only a small percentage of the examples were identified as passive uses.
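The passive-identifying patterns are only characterized abstractly above (a form of "to be" or "to get" plus a past participle). A minimal sketch of that idea, using Penn Treebank-style POS tags as an assumed input format; the word list and tag-based shortcut are our simplifications, not the paper's actual patterns.

```python
# Flag a verb group as passive when a form of "be"/"get" is immediately
# followed by a past participle (tag VBN). Input is a list of
# (word, pos-tag) pairs, as a tagger or parser would supply.

BE_GET = {'be', 'is', 'am', 'are', 'was', 'were', 'been', 'being',
          'get', 'gets', 'got', 'gotten', 'getting'}

def is_passive(tagged_verb_group):
    for (w1, _), (_, p2) in zip(tagged_verb_group, tagged_verb_group[1:]):
        if w1.lower() in BE_GET and p2 == 'VBN':
            return True
    return False

print(is_passive([('was', 'VBD'), ('abducted', 'VBN')]))   # True: passive
print(is_passive([('has', 'VBZ'), ('abducted', 'VBN')]))   # False: perfect, not passive
```

Requiring both the auxiliary and the participle keeps perfect-tense uses ("has abducted") from being misclassified as passives.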
Head words proved to be very accurate indicators of a constituent's semantic role when data was available for a given head word, confirming the importance of lexicalization shown in various other tasks. While the distribution P(r | h, t) can only be evaluated for a minority of the test data, it predicts the correct role for most of those cases, without use of any of the syntactic features.

Lexical Clustering

In order to address the sparse coverage of lexical head word statistics, an experiment was carried out using an automatic clustering of head words of the type described in (Lin, 1998). A soft clustering of nouns was performed by applying the co-occurrence model of (Hofmann and Puzicha, 1998) to a large corpus of observed direct object relationships between verbs and nouns. The clustering was computed from an automatically parsed version of the British National Corpus, using the parser of (Carroll and Rooth, 1998). The experiment was performed using only frame elements with a noun as head word. This allowed a smoothed estimate of P(r | h, nt, t) to be computed as

    P(r | h, nt, t) = Σc P(r | c, nt, t) P(c | h),

summing over the automatically derived clusters c to which a nominal head word h might belong. This allows the use of head word statistics even when the head word h has not been seen in conjunction with the target word t in the training data. While the unclustered nominal head word feature is correct for most of the cases where data for P(r | h, nt, t) is available, such data was available for only a minority of nominal head words. The clustered head word alone correctly classified a majority of the cases where the head word was in the vocabulary used for clustering, and most instances of nominal head words were in the vocabulary. Adding clustering statistics for NP constituents into the full system increased overall performance.
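The cluster-based smoothing formula can be sketched directly. All probabilities below are invented toy values, and the paper's extra conditioning on the nonterminal and target word is elided for brevity.

```python
# Smoothed role probability for a head word h via soft clusters:
# P(r | h) = sum over clusters c of P(r | c) * P(c | h).

def smoothed_role_prob(role, head, p_role_given_cluster, p_cluster_given_head):
    return sum(p_role_given_cluster[c].get(role, 0.0) * pc
               for c, pc in p_cluster_given_head[head].items())

# Invented soft clustering: "proposal" is mostly an abstract-object noun.
p_cluster_given_head = {'proposal': {'ABSTRACT': 0.9, 'PHYSICAL': 0.1}}
p_role_given_cluster = {'ABSTRACT': {'Topic': 0.8, 'Speaker': 0.2},
                        'PHYSICAL': {'Topic': 0.3, 'Speaker': 0.7}}

print(smoothed_role_prob('Topic', 'proposal',
                         p_role_given_cluster, p_cluster_given_head))
# 0.9*0.8 + 0.1*0.3 = 0.75
```

Because the mixture only needs P(c | h), a head word never seen with a given target word in training can still contribute role evidence through its clusters.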
Automatic Identification of Frame Element Boundaries

The experiments described above have used human-annotated frame element boundaries; here we address how well the frame elements can be found automatically. Experiments were conducted using features similar to those described above to identify constituents in a sentence's parse tree that were likely to be frame elements. The system was given the human-annotated target word and the frame as inputs, whereas a full language understanding system would also identify which frames come into play in a sentence, essentially the task of word sense disambiguation. The main feature used was the path from the target word through the parse tree to the constituent in question, represented as a string of parse tree nonterminals linked by symbols indicating upward or downward movement through the tree, as shown in the figure below.

Figure: In this example, for the sentence "He ate some pancakes", the path from the frame element "He" to the target word "ate" can be represented as NP↑S↓VP↓V, with ↑ indicating upward movement in the parse tree and ↓ downward movement.

The other features used were the identity of the target word and the identity of the constituent's head word. The probability distributions calculated from the training data were P(fe | path), P(fe | path, t), and P(fe | h, t), where fe indicates an event where the parse constituent in question is a frame element, path the path through the parse tree from the target word to the parse constituent, t the identity of the target word, and h the head word of the parse constituent. By varying the probability threshold at which a decision is made, one can plot a precision/recall curve, shown in the figure below.
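The path feature takes only a few lines to compute once tree positions are encoded; the root-to-node label-list encoding below is our own simplification of a real parse tree.

```python
# Path feature between two tree positions: from the constituent up to the
# lowest common ancestor, then down to the target word. A position is
# encoded as the list of nonterminal labels on its root-to-node path.

def path_feature(from_labels, to_labels):
    """E.g. the subject NP of "He ate some pancakes" is ['S', 'NP'] and
    the target verb "ate" is ['S', 'VP', 'V']."""
    i = 0
    while (i < min(len(from_labels), len(to_labels))
           and from_labels[i] == to_labels[i]):
        i += 1
    lca = i - 1                              # depth of lowest common ancestor
    up = list(reversed(from_labels[lca:]))   # constituent up to the LCA
    down = to_labels[i:]                     # below the LCA down to the target
    return '↑'.join(up) + ''.join('↓' + d for d in down)

print(path_feature(['S', 'NP'], ['S', 'VP', 'V']))   # NP↑S↓VP↓V
```

The output for the subject NP reproduces the NP↑S↓VP↓V path of the example above.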
P(fe | path, t) performs relatively poorly due to fragmentation of the training data (recall that only a small number of sentences are available for each target word). While the lexical statistic P(fe | h, t) alone is not useful as a classifier, using it in linear interpolation with the path statistics improves results. Note that this method can only identify frame elements that have a corresponding constituent in the automatically generated parse tree. For this reason, it is interesting to calculate how many true frame elements overlap with the results of the system, relaxing the criterion that the boundaries must match exactly. Results for partial matching are shown in the table below. When the automatically identified constituents were fed through the role labeling system described above, most of the constituents which had been correctly identified in the first stage were assigned the correct role in the second, roughly equivalent to the performance when assigning roles to constituents identified by hand.

Figure: Precision/recall plot for various methods of identifying frame elements: P(fe | path), P(fe | path, t), and the interpolation .75 P(fe | path) + .25 P(fe | h, t). Recall is calculated over only frame elements with matching parse constituents.

Conclusion

Our preliminary system is able to automatically label semantic roles with fairly high accuracy, indicating promise for applications in various natural language tasks. Lexical statistics computed on constituent head words were found to be the most important of the features used. While lexical statistics are quite accurate on the data covered by observations in the training set, the sparsity of the data when conditioned on lexical items meant that combining features was the key to high overall performance.
While the combined system was far more accurate than any feature taken alone, the specific method of combination used was less important.

Table: Results on identifying frame elements (FEs), including partial matches, obtained using P(fe | path) with a fixed threshold. For each type of overlap between identified constituents and true frame elements (exactly matching boundaries, identified constituent entirely within true frame element, true frame element entirely within identified constituent, partial overlap, and no match to any true frame element), the table gives the number and percentage of identified constituents; matching parse constituents were present for most of the FEs in the hand annotations.

We plan to continue this work by integrating semantic role identification with parsing, by bootstrapping the system on larger, and more representative, amounts of data, and by attempting to generalize from the set of predicates chosen by FrameNet for annotation to general text.

References

Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proceedings of COLING-ACL, Montreal, Canada.

Dan Blaheta and Eugene Charniak. 2000. Assigning function tags to parsed text. In Proceedings of the 1st Annual Meeting of the North American Chapter of the ACL (NAACL), Seattle, Washington.

Glenn Carroll and Mats Rooth. 1998. Valence induction with a head-lexicalized PCFG. In Proceedings of the 3rd Conference on Empirical Methods in Natural Language Processing (EMNLP 3), Granada, Spain.

Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meeting of the ACL.

Charles J. Fillmore and Collin F. Baker. 2000. FrameNet: Frame semantics meets the corpus. In Linguistic Society of America, January 2000.

Charles Fillmore. 1968. The case for case. In Bach and Harms, editors, Universals in Linguistic Theory, pages 1-88.
Holt, Rinehart, and Winston, New York.

Charles J. Fillmore. 1976. Frame semantics and the nature of language. In Annals of the New York Academy of Sciences: Conference on the Origin and Development of Language and Speech, volume 280.

Marti Hearst. 1999. Untangling text data mining. In Proceedings of the 37th Annual Meeting of the ACL.

Thomas Hofmann and Jan Puzicha. 1998. Statistical models for co-occurrence data. Memo, Massachusetts Institute of Technology Artificial Intelligence Laboratory, February 1998.

Ray Jackendoff. 1972. Semantic Interpretation in Generative Grammar. MIT Press, Cambridge, Massachusetts.

Maria Lapata and Chris Brew. 1999. Using subcategorization to resolve verb class ambiguity. In Joint SIGDAT Conference on Empirical Methods in NLP and Very Large Corpora, Maryland.

Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of COLING-ACL, Montreal, Canada.

Scott Miller, David Stallard, Robert Bobrow, and Richard Schwartz. 1996. A fully statistical approach to natural language interfaces. In Proceedings of the 34th Annual Meeting of the ACL.

Carl Pollard and Ivan A. Sag. 1994. Head-Driven Phrase Structure Grammar. University of Chicago Press, Chicago.

Ellen Riloff and Mark Schmelzenbach. 1998. An empirical approach to conceptual case frame acquisition. In Proceedings of the Sixth Workshop on Very Large Corpora.

Ellen Riloff. 1993. Automatically constructing a dictionary for information extraction tasks. In Proceedings of the Eleventh National Conference on Artificial Intelligence (AAAI).
Feature Logic for Dotted Types: A Formalism for Complex Word Meanings

Manfred Pinkal and Michael Kohlhase
Universität des Saarlandes, Germany
{kohlhase@ags|pinkal@coli}.uni-sb.de

Abstract

In this paper we revisit Pustejovsky's proposal to treat ontologically complex word meaning by so-called dotted pairs. We use a higher-order feature logic based on Ohori's record λ-calculus to model the semantics of words like book and library, in particular their behavior in the context of quantification and cardinality statements.

1 Introduction

The treatment of lexical ambiguity is one of the main problems in lexical semantics and in the modeling of natural language understanding. Pustejovsky's framework of the "Generative Lexicon" made a contribution to the discussion by employing the concept of type coercion, thus replacing the enumeration of readings by the systematic context-dependent generation of suitable interpretations, in the case of systematic polysemies (Pustejovsky, 1991; Pustejovsky, 1995). Also, Pustejovsky pointed to a frequent and important phenomenon in lexical semantics, which at first sight looks like another case of polysemy, but is significantly different in nature.

(1) The book is blue/on the shelf.
(2) Mary burned the book.
(3) The book is amusing.
(4) Mary understands the book.
(5) The book is beautiful.
(6) Mary likes the book.
(7) Mary read the book.

Examples (1)-(4) suggest an inherent ambiguity of the common noun book: blue, on the shelf, and burn subcategorize for a physical object, while amusing and understand require an informational object as argument. (5) and (6) are in fact ambiguous: The statements may refer either to the shape or the content of the book.
However, a thorough analysis of the situation shows that there is a third reading where the beauty of the book as well as Mary's positive attitude are due to the harmony between physical shape and informational content. The action of reading, finally, is not carried out on a physical object alone, nor on a pure informational object as argument, but requires an object which is essentially a combination of the two. This indicates a semantic relation which is conjunctive or additive in character, rather than a disjunction between readings as in the ambiguity case. In addition to the more philosophical argument, the assumption of a basically different semantic relation is supported by observations from semantic composition. If the physical/informational distinction in the semantics of book were just an ambiguity, (8) and (9) would not be consistently interpretable, since the sortal requirements of the noun modifier (amusing and on the shelf, resp.) are incompatible with the selection restrictions of the verbs burn and understand, respectively.

(8) Mary burned an amusing book.
(9) Mary understands the book on the shelf.

Pustejovsky concludes that ontologically complex objects must be taken into account to describe lexical semantics properly, and he represents them as "dotted pairs" made up from two (or more) ontologically simple objects, being semantically categorized as "dotted types", e.g., P • I in the case of book. He convincingly argues that complex types are omnipresent in the lexicon, the physical/informational object distinction being just a special case of a wide range of dotted types, including container/content (bottle), aperture/panel (door), building/institution (library).
The part of the Generative Lexicon concept which was not concerned with ontologically complex objects, i.e., type coercion and co-composition mechanisms using so-called qualia information, has triggered a line of intensive and fruitful research in lexical semantics, which has led to progress in representation formalisms and tools for the computational lexicon (see e.g. (Copestake and Briscoe, 1995; Dölling, 1995; Busa and Bouillon, forthcoming; Egg, 1999)). In contrast, a problem with Pustejovsky's proposal about the complex objects is that the dotted-pair notation has not been formally and semantically clear enough to form a starting point for meaning representation and processing. In this paper, we present a formally sound semantic reconstruction of complex objects, using a higher-order feature logic based on Ohori's record λ-calculus (1995), which has originally been developed for functional and object-oriented programming. We do not claim that our reconstruction provides a full theory of this peculiar kind of ontological object, but it appears to be useful as a basis for representing lexical entries for these objects and modeling the composition process in which they are involved. We will not only show that the basic examples above can be treated, but also that our treatment provides a straightforward solution to some puzzles concerning the behavior of dotted pairs in quantificational, cardinality and identity statements.

(10) Mary burned every book in the library.
(11) Mary understood every book in the library.
(12) There are 2000 books in the library.
(13) All new books are on the shelf.
(14) The book on your book-shelf is the one I saw in the library.

In (10), the quantification is about physical objects, whereas in (11), it concerns the books qua informational unit. (12) is ambiguous between a number-of-copies and a number-of-titles reading.
The respective readings in (10) and (11) appear to be triggered by the sortal requirements of the verbal predicate, as the ambiguity in (12) is due to the lack of a selection restriction. However, (13), uttered towards a customer in a book store, has a natural reading where the quantification relates to the information level and the predicate is about physical objects. Finally, (14) has a reading where a relation of non-physical identity is ascribed to objects which are both referred to by physical properties.

2 The Record-λ-Calculus Fλ

In order to reduce the complexity of the calculus, we will first introduce a feature λ-calculus F and then extend it to Fλ. F is an extension of the simply typed λ-calculus by feature structures (which we will call records). See Figure 1 for the syntactical categories of the raw terms. We assume the base types e (for individuals) and t (for truth values), and a set L = {ℓ1, ℓ2, ...} of features.

Figure 1: Syntax

  T ::= e | t | T → T' | {{ℓ1: T1, ..., ℓn: Tn}}                   (Types α, β, ...)
  M ::= X | c | (M N) | λX:T.M | M.ℓ | {{ℓ1 = M1, ..., ℓn = Mn}}   (Formulae A, B, ...)
  Σ ::= ∅ | Σ, [c: T]                                              (Signature)
  Γ ::= ∅ | Γ, [X: T]                                              (Environment)

The set of well-typed terms is defined by the inference rules in Figure 2 for the typing judgment Γ ⊢Σ A: α. The meaning of this judgment is that term A has type α relative to the (global) type assumptions in the signature Σ and the (local) type assumptions Γ (the context) for the variables. As usual, we say that a term A is of type α (and often simply write A^α to indicate this), iff Γ ⊢Σ A: α is derivable by these rules.

Figure 2: Well-typed terms in F

  [c: α] ∈ Σ  implies  Γ ⊢Σ c: α
  [X: α] ∈ Γ  implies  Γ ⊢Σ X: α
  Γ ⊢Σ A: β → α  and  Γ ⊢Σ C: β  imply  Γ ⊢Σ (A C): α
  Γ, [X: β] ⊢Σ A: α  implies  Γ ⊢Σ (λX.A): β → α
  Γ ⊢Σ A: {{..., ℓ: α, ...}}  implies  Γ ⊢Σ A.ℓ: α
  Γ ⊢Σ A1: α1, ..., Γ ⊢Σ An: αn  imply  Γ ⊢Σ {{ℓ1 = A1, ..., ℓn = An}}: {{ℓ1: α1, ..., ℓn: αn}}

We will call a type α a record type (with features ℓi), iff it is of the form {{ℓ1: α1, ..., ℓn: αn}}. Similarly, we call an F-term A a record, iff it has a record type. Note that the record selection operator "." can only be applied to records. In a slight abuse of notation, we will also use it on record types and have A^α.ℓ: α.ℓ. It is well-known that type inference with these rules is decidable (as a consequence we will sometimes refrain from explicitly marking types in our examples), that well-typed terms have unique types, and that the calculus admits subject reduction, i.e. that the set of well-typed terms is closed under well-typed substitutions.

The calculus F is equipped with an (operational) equality theory, given by the rules in Figure 3 (extended to congruence relations on F-terms in the usual way). The first two are just the well-known βη equality rules from the λ-calculus (we assume alphabetic renaming of bound variables wherever necessary). The second two rules specify the semantics of the record dereferencing operation ".". Here we know that these rules form a canonical (i.e. terminating and confluent) and type-safe (reduction does not change the type) reduction system, and that we therefore have unique βη-normal forms.

Figure 3: Operational equality for F

  (λX.A) B  →β  [B/X]A
  λX.(A X)  →η  A                         if X ∉ free(A)
  {{..., ℓ = A, ...}}.ℓ  →β  A
  {{ℓ1 = A.ℓ1, ..., ℓn = A.ℓn}}  →η  A    if Γ ⊢Σ A: {{ℓ1: α1, ..., ℓn: αn}}

The semantics of F is a straightforward extension of that of the simply typed λ-calculus: records are interpreted as partial functions from features to objects, and dereferencing is just application of these functions. With this semantics it is easy to show that the evaluation mapping is well-typed (I_φ(A^α) ∈ D_α) and that the equalities in Figure 3 are sound (i.e. if A =βη B, then I_φ(A) = I_φ(B)).

Up to now, we have a calculus for so-called closed records that exactly prescribe the features of a record. The semantics given above also licenses a slightly different interpretation: a record type α = {{ℓ1: α1, ..., ℓn: αn}} is descriptive, i.e. an F-term of type α would only be required to have at least the features ℓ1, ..., ℓn, but may actually have more. This makes it necessary to introduce a subtyping relation ≤, since a record {{ℓ = A}} will now have the types {{ℓ: α}} and {{}}. Of course we have {{ℓ: α}} ≤ {{}}, since the latter is less restrictive.

The higher-order feature logic Fλ we will use for the linguistic analysis in Section 3 is given as F extended by the rules in Figure 4.

Figure 4: The open record calculus Fλ

  k ≤ n  implies  {{ℓ1: α1, ..., ℓn: αn}} ≤ {{ℓ1: α1, ..., ℓk: αk}}
  Γ ⊢Σ A: α  and  α ≤ β  imply  Γ ⊢Σ A: β
  α ≤ α  for every base type α
  α' ≤ α  and  β ≤ β'  imply  (α → β) ≤ (α' → β')

The first rule specifies that record types that prescribe more features are more specific, and thus describe a smaller set of objects. The second rule is a standard weakening rule for the subtype relation. We need the reflexivity rule for base types in order to keep the last rule simple; that rule induces the subtype relation on function types from that of their domain and range types. It states that function spaces can be enlarged by enlarging the range type or by making the domain smaller (intuitively, every function can be restricted to a smaller domain). We say that ≤ is covariant (preserving the direction) in the range and contravariant (inverting the direction) in the domain type. For Fλ, we have the same meta-logical results as for F (type preservation, subject reduction, normal forms, soundness, etc.) except for the unique type property, which cannot hold by construction.
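The subtype relation can also be given an operational sketch. The Python encoding below (base types as strings, record types as dicts from features to types, function types as tuples) is our own illustration, not part of the calculus.

```python
# Does s <= t hold, under width subtyping for records and
# contravariant/covariant subtyping for function types?

def subtype(s, t):
    if isinstance(s, str) and isinstance(t, str):
        return s == t                       # reflexivity on base types
    if isinstance(s, dict) and isinstance(t, dict):
        # width subtyping: a record type with more features is more specific
        return all(l in s and subtype(s[l], t[l]) for l in t)
    if isinstance(s, tuple) and isinstance(t, tuple):
        (_, dom1, rng1), (_, dom2, rng2) = s, t
        # contravariant in the domain, covariant in the range
        return subtype(dom2, dom1) and subtype(rng1, rng2)
    return False

book = {'P': 'e', 'I': 'e'}        # {{P: e, I: e}}
physical = {'P': 'e'}              # {{P: e}}
blue = ('->', physical, 't')       # {{P: e}} -> t
# blue's type weakens to {{P: e, I: e}} -> t, so blue applies to books:
print(subtype(book, physical), subtype(blue, ('->', book, 't')))
```

The second check reproduces the weakening step of the derivation in Figure 5: a function over records with fewer features accepts records with more.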
Instead we have the principal type property, i.e. every F≤ term has a unique minimal type. To fortify our intuition about F≤, let us take a look at the following example: It should be possible to apply a function F of type {{ℓ1:α}} → β to a record with features ℓ1, ℓ2, since F only expects ℓ1. The type derivation in Figure 5 shows that F{{ℓ1 = A1, ℓ2 = A2}} is indeed well-typed. In the first block, we use the rules from Figure 4 (in particular contravariance) to establish a subtype relation that is used in the second block to weaken the type of F, so that it (in the third block) can be applied to the argument record that has one feature more than the feature ℓ1 required by F's type.

  {{ℓ1:α1, ℓ2:α2}} ≤ {{ℓ1:α1}}
  {{ℓ1:α1}} → β ≤ {{ℓ1:α1, ℓ2:α2}} → β

  Γ ⊢ F : {{ℓ1:α1}} → β
  Γ ⊢ F : {{ℓ1:α1, ℓ2:α2}} → β

  Γ ⊢ Ai : αi
  Γ ⊢ {{ℓ1 = A1, ℓ2 = A2}} : {{ℓ1:α1, ℓ2:α2}}
  Γ ⊢ F{{ℓ1 = A1, ℓ2 = A2}} : β

Figure 5: An F≤ example derivation

3 Modeling ontologically complex objects

We start with the standard Montagovian analysis (Montague, 1974), only that we base it on F≤ instead of the simply typed λ-calculus. For our example, it will be sufficient to take the set L of features as a superset of {P, I, H} (where the first two stand for the physical and informational facets of an object). In our fragment we use the extension F≤ to structure type e into subsets given by types of the form {{ℓ1:e, …, ℓn:e}}. Note that throwing away all feature information and mapping each such type to a type E in our examples will yield a standard Montagovian treatment of NL expressions, where E takes the role that e has in standard Montague grammar. Linguistic examples are the proper name Mary, which translates to mary′ : {{H:e}}, shelf, which translates to shelf′ : {{P:e}} → t, and the common noun book, which translates to book′ : {{P:e, I:e}} → t.

A predicate like blue requires a physical object as argument. To be precise, the argument need not be an object of type {{P:e}}, like a shelf or a table. blue can be perfectly applied to complex objects such as books, libraries, and doors, if they have a physical realization, irrespective of whether it is accompanied by an informational object, an institution, or an aperture. At first glance, this seems to be a significant difference from kind predicates like shelf and book. However, it is OK to interpret the type assignment for kind predicates along with property-denoting expressions: In both cases, the occurrence of a feature ℓ means that ℓ occurs in the type of the argument object. Thus, {{ℓ:e}} → t is a sortal characterization for a predicate A with the following impact: 1. A has a value for feature ℓ, possibly among other features; 2. the semantics of A is projective, i.e., the applicability conditions of A, and accordingly the truth value of the resulting predication, depend only on the value of ℓ. Note that 1. is exactly the behavior that we have built the extension F≤ for, and that we have discussed with the example in Figure 5. We will now come to 2. Although type e never occurs as an argument type directly in the translation of NL expressions, representation language constants with type-e arguments are useful in the definition of the semantics of lexical entries. E.g., the semantics of book can be defined using the basic constant book* of type e → e → t, as λx.(book*(x.P, x.I)), where book* expresses the book-specific relation holding between physical and informational objects.¹
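The projective reading of book′ can be made concrete with a small sketch. Records are modeled as Python dicts over facet labels; book_star, blue, and the sample individuals are toy stand-ins of ours, not part of the formal fragment.

```python
# Ontologically complex objects as feature records (illustrative only).

def book_star(phys, info):
    # basic relation book* between a physical and an informational
    # object; a toy extension with a single book
    return (phys, info) in {("copy1", "war_and_peace")}

def book(x):
    # book' = lambda x . book*(x.P, x.I): projective in P and I
    return book_star(x["P"], x["I"])

def blue(x):
    # blue only inspects the physical facet, so it applies to any
    # record that has (at least) a P feature
    return x["P"] in {"copy1"}

b = {"P": "copy1", "I": "war_and_peace"}
```

Both book and blue apply to the same record b; blue simply ignores every facet but P, which is exactly the projectivity property described above.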
The fragment in Figure 6 provides representations for some of the lexical items occurring in the examples of Section 1, in terms of the basic expressions

  mary* : e;  shelf*, blue*, amusing* : e → t;  on*, book*, burn*, understand* : e → e → t;  read* : e → e → e → t

Observe that the representations nicely reflect the distinction between the linguistic arity of the lexical items, which is given by the λ-prefix (e.g., two-place in the case of read), and the "ontological arity" of the underlying basic relations (e.g., the 3-place relation holding between a person, the physical object which is visually scanned, and the content which is acquired by that action).

  Word        Meaning                       Type
  Mary        {{H = mary*}}                 {{H:e}}
  shelf       λx.shelf*(x.P)                {{P:e}} → t
  book        λx.book*(x.P, x.I)            {{P:e, I:e}} → t
  amusing     λx.amusing*(x.I)              {{I:e}} → t
  on          λxλy.on*(x.P, y.P)            {{P:e}} → {{P:e}} → t
  burn        λxλy.burn*(x.H, y.P)          {{H:e}} → {{P:e}} → t
  understand  λxλy.understand*(x.H, y.I)    {{H:e}} → {{I:e}} → t
  read        λxλy.read*(x.H, y.P, y.I)     {{H:e}} → {{P:e, I:e}} → t

Figure 6: A tiny fragment of English

¹ Pustejovsky conjectures that the relation holding among different ontological levels is more than just a set of pairs. We restrict ourselves to the extensional level here.

In particular, all of the meanings are projective, i.e. they only pick out the features from the complex arguments and make them available to the basic predicate. Therefore, we can reconstruct the meaning term R = λxλy.read*(x.H, y.P, y.I) of read if we only know the relevant features (we call them selection restrictions) of the arguments, and write R as read*[{H}{P,I}]. The interpretation of sentence (2) via basic predicates is shown in (15) to (17). For simplicity, the definite noun phrase is translated by an existential quantifier here.

(15) shows the result of the direct one-to-one translation of lexical items into representation language constants. In (16), these constants are replaced by λ-terms taken from the fragment. (17) is obtained by β-reduction and η-equality from (16); in particular, {{H = mary*}}.H is replaced by the equivalent mary*.

(15) ∃v. book′(v) ∧ burn′({{H = mary*}}, v)
(16) ∃v. (λx.book*(x.P, x.I))(v) ∧ (λxλy.burn*(x.H, y.P))({{H = mary*}}, v)
(17) ∃v. book*(v.P, v.I) ∧ burn*(mary*, v.P)

(18) and (19), as semantic representations for (4) and (7), respectively, demonstrate how the predicates understand and read pick out objects of the appropriate ontological levels. (20) and (21) are interpretations of (8) and (9), respectively, where nested functors coming with different sortal constraints apply to one argument. The representations show that the functors select their appropriate ontological level locally, thereby avoiding global inconsistency.

(18) ∃v. book*(v.P, v.I) ∧ understand*(mary*, v.I)
(19) ∃v. book*(v.P, v.I) ∧ read*(mary*, v.P, v.I)
(20) ∃v. book*(v.P, v.I) ∧ amusing*(v.I) ∧ burn*(mary*, v.P)
(21) ∃v. book*(v.P, v.I) ∧ ∃u. shelf*(u.P) ∧ on*(v.P, u.P) ∧ understand*(mary*, v.I)

The lexical items beautiful and like in (5) and (6), respectively, are polysemous because of the lack of strict sortal requirements. They can be represented as relational expressions containing a parameter for the selection restrictions, which has to be instantiated to a set of features by context. like, e.g., can be translated to like[S]′, with like[{P}]′, like[{I}]′, and like[{P,I}]′ as (some of the) possible readings. Of course this presupposes the availability of a set of basic predicates like*_i of different ontological arities.

4 Quantifiers and Cardinalities

We now turn to the behavior of non-existential quantifiers and cardinality operators in combination with complex objects. The choice of the appropriate ontological level for an application of these operators may be guided by the sortal requirements of the predicates used (as in (10)-(12)), but as (13) demonstrates, it is not determined by the lexical semantics. We represent quantifiers and cardinality operators as second-order relations, according to the theory of generalized quantifiers (Montague, 1974; Barwise and Cooper, 1981), and take them to be parameterized by a context variable S ⊆ L for selection restrictions, in the same manner as the predicates like and beautiful. The value of S may depend on the general context as well as on semantic properties of lexical items in the utterance. We define the semantics of a parameterized quantifier Q|S by applying its respective basic, non-parameterized variant to the S-projections of its argument predicates P and B, which we write as P|S and B|S, respectively. Formally, P|{ℓ1,…,ℓn} is

  λx1 … λxn. ∃u. P(u) ∧ x1 = u.ℓ1 ∧ … ∧ xn = u.ℓn

A first proposal is given in (22). (23) gives the representation of sentence (13) in the "bookstore reading" (omitting the semantics of new and representing on the shelf as an atomic one-place predicate, for simplicity), (24) the reduction of (23) to ordinary quantification on the S-projections, which is equivalent to the first-order formula (25), which in turn can be spelled out as (26) using basic predicates.
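The S-projection just defined can be sketched extensionally in Python; the function name project and the sample records are illustrative assumptions of ours, not part of the formal fragment.

```python
# Sketch of the S-projection P|_S: the projection of a predicate
# extension (a collection of feature records) to the features in S.

def project(pred_ext, labels):
    """pred_ext: iterable of records (dicts); returns the set of
    projected tuples, i.e. the extension of P|_S."""
    return {tuple(u[l] for l in labels) for u in pred_ext}

# two physical copies of one informational object
books = [{"P": "copy1", "I": "moby_dick"},
         {"P": "copy2", "I": "moby_dick"}]
```

Projecting books to {I} yields a single informational object while the {P}-projection has two elements, which is exactly the difference that drives the choice between the readings discussed below.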
(22) Q|S(P, B) ⇔ Q(P|S, B|S)
(23) every|{I}(book′, on_shelf′)
(24) every(book′|{I}, on_shelf′|{I})
(25) ∀x. (∃u. x = u.I ∧ book′(u)) ⟹ ∃v. x = v.I ∧ on_shelf′(v)
(26) ∀x. (∃u. x = u.I ∧ book*(u.P, u.I)) ⟹ ∃v. x = v.I ∧ on_shelf*(v.P)

As one can easily see, the instantiation of S to {I} triggers the wanted ∀∃ reading ("for all books (as informational objects) there is a physical object on the shelf"), whereas the instantiation to {P} would have given the ∀∀ reading, since on_shelf′ is projective for P only, and as a consequence we have

  on_shelf′|{P} = λx. ∃u. on_shelf′(u) ∧ x = u.P
               = λx. ∃u. on_shelf*(u.P) ∧ x = u.P
               ⇔ λx. ∃u. on_shelf*(x) ∧ x = u.P
               ⇔ λx. on_shelf*(x)

The extension to cases (10)-(12) is straightforward. The proposed interpretation may be too permissive. Take a situation where new publications are alternatively available as books and on CD-ROM. Then (22)-(26) may come out to be true even if no book at all is on the shelf (only one CD-ROM containing all new titles). We therefore slightly modify the general scheme (22) by (27), where the restriction of the quantifier is repeated in the nuclear scope.

(27) Q|S(P, B) ⇔ Q(P|S, (λx. P(x) ∧ B(x))|S)

For ordinary quantification, this does not cause any change, because of the monotonicity of NL quantifiers. In our case of level-specific quantification, it guarantees that the second argument covers only projections originating from the right type of complex objects. We give the revised first-order representation corresponding to (26) in (28).

(28) ∀x. (∃u. x = u.I ∧ book*(u.P, u.I)) ⟹ ∃v. x = v.I ∧ book*(v.P, v.I) ∧ on_shelf*(v.P)

5 Conclusion

Our higher-order feature logic F≤ provides a framework for the simple and straightforward modeling of ontologically complex objects, including the puzzles of quantification and cardinality statements.
In this framework, a number of interesting empirical questions can be further pursued: The ontology for complex objects can be investigated. So far, we constrained ourselves to the simplest case of "dotted pairs", and may even have taken over a wrong classification from the literature, talking about the dualism of physical and informational objects, where a type/token distinction might have been more adequate. The reality about books (as well as bottles and libraries) might be more complex, however, including both the P/I distinction as well as hierarchical type/token structures. The linguistic selection restrictions are probably more complex than we assumed in this paper: As Pustejovsky argues (1998), we may have to distinguish exocentric and endocentric cases of dotted pairs, as well as projective and non-projective verbal predicates. Another fruitful question might be whether the framework could be used to reconsider the mechanism of type coercion in general: It may be that at least some cases of reinterpretation may be better described by adding an ontological level, and thus creating a complex object, rather than by switching from one level to another.

We would like to conclude with a very general remark: The data type of feature structures as employed in our formalism has been widely used in grammar formalisms, among other things to incorporate semantic information. In this paper, a logical framework for semantics is proposed which itself has feature structures as a part of the meaning representation. It may be worthwhile to consider whether this property can be used to tell a new story about treating syntax and semantics in a uniform framework.

References

John Barwise and Robin Cooper. 1981. Generalized quantifiers and natural language. Linguistics and Philosophy, 4:159-219.

F. Busa and P. Bouillon, editors. forthcoming. The Language of Word Meaning. Cambridge University Press, Cambridge.

A. Copestake and T. Briscoe. 1995. Semi-productive polysemy and sense extension. Journal of Semantics, 12:15-67.

J. Dölling. 1995. Ontological domains, semantic sorts and systematic ambiguity. Int. Journal of Human-Computer Studies, 43:785-807.

Markus Egg. 1999. Reinterpretation from a synchronic and diachronic point of view. Submitted.

R. Montague. 1974. The proper treatment of quantification in ordinary English. In R. Montague, editor, Formal Philosophy. Selected Papers. Yale University Press, New Haven.

Atsushi Ohori. 1995. A polymorphic record calculus and its compilation. ACM Transactions on Programming Languages and Systems, 17(6):844-895.

James Pustejovsky. 1991. The generative lexicon. Computational Linguistics, 17.

James Pustejovsky. 1995. The Generative Lexicon. MIT Press, Cambridge, MA.

James Pustejovsky. 1998. The semantics of lexical underspecification. Folia Linguistica, 32:323-347.
PENS: A Machine-aided English Writing System for Chinese Users

Ting Liu1, Ming Zhou, Jianfeng Gao, Endong Xun, Changning Huang
Natural Language Computing Group, Microsoft Research China, Microsoft Corporation
5F, Beijing Sigma Center, 100080 Beijing, P.R.C.
{ i-liutin, mingzhou, jfgao, i-edxun, [email protected] }

Abstract

Writing English is a big barrier for most Chinese users. Building a computer-aided system that helps Chinese users not only with spelling checking and grammar checking but also with writing in the way of native English is a challenging task. Although machine translation is widely used for this purpose, how to find an efficient way in which humans collaborate with computers remains an open issue. In this paper, based on a comprehensive study of Chinese users' requirements, we propose an approach to a machine-aided English writing system which consists of two components: 1) a statistical approach to word spelling help, and 2) an information retrieval based approach to intelligent recommendation by providing suggestive example sentences. Both components work together in a unified way, and highly improve the productivity of English writing. We also developed a pilot system, namely PENS (Perfect ENglish System). Preliminary experiments show very promising results.

Introduction

With the rapid development of the Internet, writing English has become daily work for computer users all over the world. However, for Chinese users, who have a significantly different culture and writing style, English writing is a big barrier. Therefore, building a machine-aided English writing system, which helps Chinese users not only with spelling checking and grammar checking but also with writing in the way of native English, is a very promising task. Statistics show that almost all Chinese users who need to write in English have enough knowledge of English that they can easily tell the difference between two sentences written in Chinese-English and native English, respectively.
Thus, the machine-aided English writing system should act as a consultant that provides various kinds of help whenever necessary, and let users play the major role during writing. These helps include: 1) spelling help: help users input hard-to-spell words, and check their usage in a certain context simultaneously; 2) example sentence help: help users refine the writing by providing perfect example sentences. Several machine-aided approaches have been proposed recently. They basically fall into two categories: 1) automatic translation, and 2) translation memory. Both work at the sentence level. In the former, the translation is often not readable even after a lot of manual editing. The latter works like a case-based system, in that, given a sentence, the system retrieves similar sentences from a translation example database, and the user then translates his sentences by analogy. To build a computer-aided English writing system that helps Chinese users write in the way of native English is a challenging task. Machine translation is widely used for this purpose, but how to find an efficient way in which humans collaborate well with computers remains an open issue. Although the quality of fully automatic machine translation at the sentence level is by no means satisfactory, it is hopeful to provide relatively acceptable quality translations at the word or short phrase level. Therefore, we can expect that combining word/phrase level automatic translation with translation memory will achieve a better solution to a machine-aided English writing system [Zhou, 95]. In this paper, we propose an approach to a machine-aided English writing system which consists of two components: 1) a statistical approach to word spelling help, and 2) an information retrieval based approach to intelligent recommendation by providing suggestive example sentences.

1 Now Ting Liu is an associate professor at Harbin Institute of Technology, P.R.C.
Both components work together in a unified way, and highly improve the productivity of English writing. We also developed a pilot system, namely PENS. Preliminary experiments show very promising results. The rest of this paper is structured as follows. In section 2 we give an overview of the system, introduce its components, and describe the resources needed. In section 3, we discuss the word spelling help, focusing the discussion on Chinese-pinyin-to-English-word translation. In addition, we describe various kinds of word-level help functions, such as automatic translation of Chinese words in the form of either pinyin or Chinese characters, synonym suggestion, etc. We also describe the user interface briefly. In section 4, an effective retrieval algorithm is proposed to implement the so-called intelligent recommendation function. In section 5, we present preliminary experimental results. Finally, concluding remarks are given in section 6.

1 System Overview

1.1 System Architecture

Figure 1: System Architecture

There are two modules in PENS. The first is called the spelling help. Given an English word, the spelling help performs two functions: 1) retrieving its synonyms, antonyms, and thesaurus entries; or 2) automatically giving the corresponding translation of Chinese words in the form of Chinese characters or pinyin. Statistical machine translation techniques are used for this translation, and therefore a Chinese-English bilingual dictionary (MRD), an English language model, and an English-Chinese word-translation model (TM) are needed. The English language model is a word trigram model, which consists of 247,238,396 trigrams over a vocabulary of 58,541 words. The MRD dictionary contains 115,200 Chinese entries together with their corresponding English translations and other information, such as part-of-speech, semantic classification, etc. The TM is trained from a word-aligned bilingual corpus, which contains approximately 96,362 bilingual sentence pairs.

The second module is an intelligent recommendation system. It employs an effective sentence retrieval algorithm on a large bilingual corpus. The input is a sequence of keywords or a short phrase given by the user, and the output is a limited number of bilingual sentence pairs whose meanings are relevant to the user's query, or just a few pairs of bilingual sentences with syntactic relevance.

1.2 Bilingual Corpus Construction

We have collected bilingual texts extracted from World Wide Web bilingual sites, dictionaries, books, bilingual news and magazines, and product manuals. The size of the corpus is 96,362 sentence pairs. The corpus is used in the following three ways:

1) as translation memory, to support the intelligent recommendation function;
2) to acquire the English-Chinese translation model, to support translation at the word and phrase level;
3) to extract bilingual terms, to enrich the Chinese-English MRD.

To construct a sentence-aligned bilingual corpus, we first use an alignment algorithm to perform automatic alignment, and then the alignment results are corrected. There have been quite a number of recent papers on parallel text alignment. Lexically based techniques use extensive online bilingual lexicons to match sentences [Chen 93]. In contrast, statistical techniques require almost no prior knowledge and are based solely on the lengths of sentences, i.e. the length-based alignment method. We use a novel method to incorporate both approaches [Liu, 95]. First, a rough result is obtained by using the length-based method. Then anchors are identified in the text to reduce the complexity. An anchor is defined as a block that consists of n successive sentences. Our experiments show best performance when n=3. Finally, a small, restricted set of lexical cues is applied for further improvement.
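The rough, length-based first pass of the alignment method can be sketched as a small dynamic program over sentence lengths; the function name, the 1-0/0-1 penalties, and the scoring are illustrative assumptions of ours, not the system's actual implementation.

```python
import math

def align(src_lens, tgt_lens):
    """DP over 1-1, 1-0, 0-1 alignments; 1-1 beads are scored by how
    far the target length deviates from the source length."""
    INF = float("inf")
    n, m = len(src_lens), len(tgt_lens)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            if i < n and j < m:                      # 1-1 bead
                d = abs(math.log((tgt_lens[j] + 1) / (src_lens[i] + 1)))
                if cost[i][j] + d < cost[i + 1][j + 1]:
                    cost[i + 1][j + 1] = cost[i][j] + d
                    back[i + 1][j + 1] = (i, j)
            for di, dj, pen in ((1, 0, 3.0), (0, 1, 3.0)):  # 1-0 / 0-1
                if i + di <= n and j + dj <= m and \
                        cost[i][j] + pen < cost[i + di][j + dj]:
                    cost[i + di][j + dj] = cost[i][j] + pen
                    back[i + di][j + dj] = (i, j)
    beads, ij = [], (n, m)                           # trace back
    while back[ij[0]][ij[1]] is not None:
        prev = back[ij[0]][ij[1]]
        beads.append((prev, ij))
        ij = prev
    return list(reversed(beads))
```

On sentence-length sequences that track each other closely, the cheapest path consists of 1-1 beads; lexical cues and anchors would then refine this rough result, as described above.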
1.3 Translation Model Training

Chinese sentences must be segmented before word translation training, because written Chinese consists of a character stream without spaces between words. Therefore, we use a wordlist, which consists of 65,502 words, in conjunction with an optimization procedure described in [Gao, 2000]. The bilingual training process employs a variant of the model in [Brown, 1993] and as such is based on an iterative EM (expectation-maximization) procedure for maximizing the likelihood of generating the English given the Chinese portion. The output of the training process is a set of potential English translations for each Chinese word, together with a probability estimate for each translation.

1.4 Extraction of Bilingual Domain-specific Terms

A domain-specific term is defined as a string that consists of more than one successive word and has a certain number of occurrences in a text collection within a specific domain. Such a string has a complete meaning and lexical boundaries in semantics; it might be a compound word, phrase, or linguistic template. We use two steps to extract bilingual terms from the sentence-aligned corpus. First we extract Chinese monolingual terms from the Chinese part of the corpus by a method similar to that described in [Chien, 1998]; then we extract the corresponding English parts by using the word alignment information. A candidate list of Chinese-English bilingual terms is obtained as the result. We then check the list and add the terms to the dictionary.

2 Spelling Help

The spelling help works on the word or phrase level. Given an English word or phrase, it performs two functions: 1) retrieving corresponding synonyms, antonyms, and thesaurus entries; and 2) automatically giving the corresponding translation of Chinese words in the form of Chinese characters or pinyin. We will focus our discussion on the latter function in this section. To use the latter function, the user may input Chinese characters or just input pinyin.
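The EM-based translation model training described in section 1.3 can be sketched as an IBM Model 1 style iteration. This is a simplification of the Brown et al. variant actually used, and all names and data below are illustrative.

```python
from collections import defaultdict

def train_model1(pairs, iterations=10):
    """pairs: list of (chinese_words, english_words) sentence pairs.
    Returns t[(e, c)] ~ P(e | c), estimated by EM (IBM Model 1 style)."""
    e_vocab = {e for _, es in pairs for e in es}
    t = defaultdict(lambda: 1.0 / len(e_vocab))     # uniform start
    for _ in range(iterations):
        count = defaultdict(float)
        total = defaultdict(float)
        for cs, es in pairs:
            for e in es:
                z = sum(t[(e, c)] for c in cs)      # normalization
                for c in cs:
                    frac = t[(e, c)] / z            # expected counts
                    count[(e, c)] += frac
                    total[c] += frac
        for (e, c) in count:                        # M-step
            t[(e, c)] = count[(e, c)] / total[c]
    return t
```

Even on a toy pair of aligned sentences, a few iterations concentrate the probability mass on the co-occurring translations, which is the list of weighted candidate translations per Chinese word that the spelling help consumes.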
It is not very convenient for Chinese users to input Chinese characters with an English keyboard. Furthermore, the user must switch between English input mode and Chinese input mode time and again. These operations interrupt his train of thought. To avoid this shortcoming, our system allows the user to input pinyin instead of Chinese characters. The pinyin can be translated into English words directly. Let us take a user scenario as an example to show how the spelling help works. Suppose that a user inputs a Chinese word “ ” in the form of pinyin, say “wancheng”, as shown in Figure 1-1. PENS is able to detect automatically whether a string is a pinyin string or an English string. For a pinyin string, PENS tries to translate it into the corresponding English word or phrase directly. The mapping from pinyin to Chinese words is one-to-many, and so is the mapping from Chinese words to English words. Therefore, for each pinyin string, there are alternative translations. PENS employs a statistical approach to determine the correct translation. PENS also displays the corresponding Chinese word or phrase for confirmation, as shown in Figure 1-2.

Figure 1-1

Figure 1-2

If the user is not satisfied with the English word determined by PENS, he can browse other candidates as well as their bilingual example sentences, and select a better one, as shown in Figure 1-3.

Figure 1-3

2.1 Word Translation Algorithm Based on Statistical LM and TM

Suppose that a user inputs two English words, say EW1 and EW2, and then a pinyin string, say PY. For PY, all candidate Chinese words are determined by looking up a pinyin-Chinese dictionary. Then, a list of candidate English translations is obtained according to the MRD. These English translations are English words in their base form, while they should take different forms in different contexts. We exploit morphology for this purpose, and expand each word to all possible forms. For instance, inflections of “go” may be “went” and “gone”. In what follows, we will describe how to determine the proper translation among the candidate list.

Figure 2-1: Word-level Pinyin-English Translation

As shown in Figure 2-1, we assume that the most proper translation of PY is the English word with the highest conditional probability among all leaf nodes, that is:

  EW* = argmax_{EW_ij} P(EW_ij | PY, EW1, EW2)

According to Bayes' law, the conditional probability is estimated by

(2-1)  P(EW_ij | PY, EW1, EW2) = P(PY | EW_ij, EW1, EW2) × P(EW_ij | EW1, EW2) / P(PY | EW1, EW2)

Since the denominator is independent of EW_ij, we rewrite (2-1) as

(2-2)  P(EW_ij | PY, EW1, EW2) ∝ P(PY | EW_ij, EW1, EW2) × P(EW_ij | EW1, EW2)

Since CW_i is a bridge which connects the pinyin and the English translation, we introduce the Chinese word CW_i into the conditional probability and get

(2-3)  P(PY | EW_ij, EW1, EW2) = P(CW_i | EW_ij, EW1, EW2) × P(PY | CW_i, EW_ij, EW1, EW2) / P(CW_i | PY, EW_ij, EW1, EW2)

For simplicity, we assume that a Chinese word does not depend on the translation context, so we get the following approximate equation:

  P(CW_i | EW_ij, EW1, EW2) ≈ P(CW_i | EW_ij)

We can also assume that the pinyin of a Chinese word is not concerned with the corresponding English translation, namely:

  P(PY | CW_i, EW_ij, EW1, EW2) ≈ P(PY | CW_i)

It is almost impossible that two Chinese words correspond to the same pinyin and the same English translation, so we can suppose that:

  P(CW_i | PY, EW_ij, EW1, EW2) ≈ 1

Therefore, we get the approximation of (2-3) as follows:

(2-4)  P(PY | EW_ij, EW1, EW2) = P(CW_i | EW_ij) × P(PY | CW_i)

According to formulas (2-2) and (2-4), we get:

(2-5)  P(EW_ij | PY, EW1, EW2) ∝ P(CW_i | EW_ij) × P(PY | CW_i) × P(EW_ij | EW1, EW2)

where P(CW_i | EW_ij) is the translation model, which can be obtained from the bilingual corpus; P(PY | CW_i) is the polyphone model (here we assume P(PY | CW_i) = 1); and P(EW_ij | EW1, EW2) is the English trigram language model. To sum up, as indicated in (2-6), the spelling help finds the most proper translation of PY by retrieving the English word with the highest conditional probability:

(2-6)  EW* = argmax_{EW_ij} P(EW_ij | PY, EW1, EW2) = argmax_{EW_ij} P(CW_i | EW_ij) × P(EW_ij | EW1, EW2)

3 Intelligent Recommendation

The intelligent recommendation works on the sentence level. When a user inputs a sequence of Chinese characters, the character string is first segmented into one or more words. The segmented word string acts as the user query in IR. After query expansion, the intelligent recommendation employs an effective sentence retrieval algorithm on a large bilingual corpus, and retrieves a pair (or a set of pairs) of bilingual sentences related to the query. All the retrieved sentence pairs are ranked based on a scoring strategy.

3.1 Query Expansion

Suppose that a user query is of the form CW1, CW2, …, CWm. We then list all synonyms for each word of the query based on a Chinese thesaurus, as shown below:

  CW11  CW12  …  CW1n1
  CW21  CW22  …  CW2n2
  …
  CWm1  CWm2  …  CWmnm

We can obtain an expanded query by substituting a word in the query with one of its synonyms. To avoid over-generation, we restrict that only one word is substituted at each time. Let us take the query “ ” as an example. The query consists of two words. By substituting the first word, we get expanded queries such as “ ”, “ ”, “ ”, etc., and by substituting the second word, we get other expanded queries such as “ ”, “ ”, “ ”, etc. Then we select the expanded query, which is used for retrieving example sentence pairs, by estimating the mutual information of the words within the query, as indicated below:

  argmax_{i,j} Σ_{k=1, k≠i}^{m} MI(CW_ij, CW_k)

where CW_k is the kth Chinese word in the query, and CW_ij is the jth synonym of the ith Chinese word. In the above example, “ ” is selected.
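The selection rule (2-6) of section 2.1 can be sketched as follows. The dictionaries tm and lm stand in for the trained translation model and trigram language model, and all names and data are illustrative, not PENS's actual implementation.

```python
import math

def best_translation(candidates, tm, lm, ew1, ew2):
    """candidates: (chinese_word, english_word) pairs reachable from
    the pinyin; tm[(cw, ew)] ~ P(cw | ew); lm[(w1, w2, w3)] ~
    P(w3 | w1, w2).  Returns the pair maximizing (2-6) in log space."""
    def score(cw, ew):
        return math.log(tm.get((cw, ew), 1e-9)) + \
               math.log(lm.get((ew1, ew2, ew), 1e-9))
    return max(candidates, key=lambda p: score(*p))
```

A candidate with a slightly lower translation probability can still win if the trigram context strongly prefers it, which is exactly the interplay of TM and LM that (2-6) encodes.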
This selection agrees well with common sense; therefore, bilingual example sentences containing “ ” will be retrieved as well.

3.2 Ranking Algorithm

The input of the ranking algorithm is a query Q, as described above; Q is a Chinese word string, as shown below:

  Q = T1, T2, T3, …, Tk

The output is a set of relevant bilingual example sentence pairs of the form

  S = {(Chinsent, Engsent) | Relevance(Q, Chinsent) > θ, Relevance(Q, Engsent) > θ}

where Chinsent is a Chinese sentence, Engsent is an English sentence, and θ is a relevance threshold. For each sentence, the relevance score is computed in two parts: 1) the bonus, which represents the similarity of the input query and the target sentence, and 2) the penalty, which represents the dissimilarity of the input query and the target sentence. The bonus is computed by the following formula:

  Bonus_i = Σ_{j=1}^{m} W_j × log(1 + tf_ij) × log(n / df_j) / L_i

where W_j is the weight of the jth word in query Q (described later), tf_ij is the number of occurrences of the jth word in sentence i, n is the number of sentences in the corpus, df_j is the number of sentences containing the jth word, and L_i is the number of words in the ith sentence. The above formula captures only the algebraic similarity. To take the geometric similarity into consideration, we designed a penalty formula; the idea is to use the editing distance to compute the geometric similarity:

  R_i = Bonus_i − Penalty_i

Suppose the matched word lists between query Q and a sentence are represented as A and B, respectively:

  A1, A2, A3, …, Al
  B1, B2, B3, …, Bm

The editing distance is defined as the number of editing operations needed to revise B to A. The penalty increases with each editing operation, but the amount differs across word categories; for example, the penalty is heavier when operating on a verb than on a noun. The penalty is computed as

  Penalty_i = Σ_{j=1}^{h} W'_j × log(1 + E_j) × log(n / df_j) / L_i

where W'_j is the penalty weight of the jth word and E_j its editing distance. We define the score and penalty for each part-of-speech as follows:

  POS               Score  Penalty
  Noun                6       6
  Verb               10      10
  Adjective           8       8
  Adverb              8       8
  Preposition         8       8
  Conjunction         4       4
  Digit               4       4
  Digit-classifier    4       4
  Classifier          4       4
  Exclamation         4       4
  Pronoun             4       4
  Auxiliary           6       6
  Post-preposition    6       6
  Idiom               6       6

We then select the first … sentence pairs with the highest relevance scores.

4 Experimental Results & Evaluation

In this section, we report the preliminary experimental results on 1) word-level pinyin-English translation, and 2) example sentence retrieval.

4.1 Word-level Pinyin-English Translation

Firstly, we built a testing set based on the word-aligned bilingual corpus automatically. Suppose that there is a word-aligned bilingual sentence pair, and every Chinese word is labelled with pinyin, as in Figure 4-1.

Figure 4-1: An example of an aligned bilingual sentence

If we substitute an English word with the pinyin of the Chinese word which the English word is aligned to, we can get a testing example for word-level pinyin-English translation. Since the user only cares about how to write content words, rather than function words, we skip function words in the English sentence. In this example, suppose EW1 is a function word, and EW2 and EW3 are content words; thus the extracted testing examples are:

  EW1 PY2 (CW2, EW2)
  EW1 EW2 PY4 (CW4, EW3)

The Chinese words and English words in brackets are the standard answers to the pinyin. We can get the precision of translation by comparing the standard answers with the answers obtained by the pinyin-English translation module. The standard testing set includes 1198 testing sentences, and all the pinyins are polysyllabic. The experimental result is shown in Figure 4-2.

                                           Shoot Rate
  Chinese Word                              0.964942
  English Top 1                             0.794658
  English Top 5                             0.932387
  English Top 1 (considering morphology)    0.606845
  English Top 5 (considering morphology)    0.834725

Figure 4-2: Testing of pinyin-English word-level translation

4.2 Example Sentence Retrieval

We built a standard example sentence set which consists of 964 bilingual example sentence pairs. We also created 50 Chinese-phrase queries manually based on the set. Then we labelled every sentence with the 50 queries. For instance, say the example sentence is “…” (He drew the conclusion by building on his own investigation.); after labelling, the corresponding queries are “ ” and “ ”; that is, when a user inputs these queries, the above example sentence should be picked out. After we labelled all 964 sentences, we ran the sentence retrieval module on the sentence set; that is, PENS retrieved example sentences for each of the 50 queries. Therefore, for each query, we compared the sentence set retrieved by PENS with the sentences labelled manually, and evaluated the performance by estimating precision and recall. Let A denote the number of sentences selected by both the human and the machine, B the number of sentences selected only by the machine, and C the number of sentences selected only by the human. The precision of the retrieval for query i, say P_i, is estimated by

  P_i = A / (A + B)

and the recall R_i is estimated by

  R_i = A / (A + C)

The average precision is

  P = (1/50) Σ_{i=1}^{50} P_i

and the average recall is

  R = (1/50) Σ_{i=1}^{50} R_i

The experimental results are P = 83.3% and R = 55.7%. The user only cares whether he can obtain a useful example sentence, and it is unnecessary for the system to find out all the relevant sentences in the bilingual sentence corpus. Therefore, example sentence retrieval in PENS differs from conventional text retrieval on this point.
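The per-query precision and recall computation used in this evaluation can be sketched as follows (the function names are ours; the machine-selected and human-labelled sets are represented as Python sets).

```python
def pr(machine, human):
    """Precision and recall of one query: machine and human are the
    sets of sentences selected by the system and by the annotator."""
    a = len(machine & human)            # selected by both
    p = a / len(machine) if machine else 0.0
    r = a / len(human) if human else 0.0
    return p, r

def macro_average(results):
    """Macro-average (P_i, R_i) pairs over all queries."""
    ps, rs = zip(*results)
    return sum(ps) / len(ps), sum(rs) / len(rs)
```

Averaging the 50 per-query (P_i, R_i) pairs this way yields the reported figures of P = 83.3% and R = 55.7%.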
Conclusion
In this paper, based on a comprehensive study of Chinese users' requirements, we propose a unified approach to a machine-aided English writing system, which consists of two components: 1) a statistical approach to word spelling help, and 2) an information-retrieval-based approach to intelligent recommendation that provides suggestive example sentences. While the former works at the word or phrase level, the latter works at the sentence level. Both components work together in a unified way, and greatly improve the productivity of English writing. We have also developed a pilot system, namely PENS, in which we try to find an efficient way for humans to collaborate with computers. Although many components of PENS are still under development, preliminary experiments on two standard testing sets have already shown very promising results.

References
Ming Zhou, Sheng Li, Tiejun Zhao, Min Zhang, Xiaohu Liu, Meng Cai (1995). DEAR: A translator's workstation. In Proceedings of NLPRS'95, Dec. 5-7, Seoul.
Xin Liu, Ming Zhou, Shenghuo Zhu, Changning Huang (1998). Aligning sentences in parallel corpora using self-extracted lexical information. Chinese Journal of Computers (in Chinese), 1998, Vol. 21 (Supplement):151-158.
Chen, Stanley F. (1993). Aligning sentences in bilingual corpora using lexical information. In Proceedings of the 31st Annual Conference of the Association for Computational Linguistics, 9-16, Columbus, OH.
Brown, P.F., Jennifer C. Lai, and R.L. Mercer (1991). Aligning sentences in parallel corpora. In Proceedings of the 29th Annual Conference of the Association for Computational Linguistics, 169-176, Berkeley.
Dekai Wu, Xuanyin Xia (1995). Large-scale automatic extraction of an English-Chinese translation lexicon. Machine Translation, 9:3-4, 285-313.
Church, K.W. (1993). Char-align: A program for aligning parallel texts at the character level. In Proceedings of the 31st Annual Conference of the Association for Computational Linguistics, 1-8, Columbus, OH.
Dagan, I., K.W. Church, and W.A. Gale (1993). Robust bilingual word alignment for machine aided translation. In Proceedings of the Workshop on Very Large Corpora, 69-85, Kyoto, August.
Jianfeng Gao, Han-Feng Wang, Mingjing Li, and Kai-Fu Lee (2000). A unified approach to statistical language modeling for Chinese. In IEEE ICASSP 2000.
Brown, P.F., S.A. Della Pietra, V.J. Della Pietra, and R.L. Mercer (1993). The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311.
Lee-Feng Chien (1998). PAT-tree-based adaptive keyphrase extraction for intelligent Chinese information retrieval. Special issue on "Information Retrieval with Asian Languages", Information Processing and Management, 1998.
2000
Diagnostic Processing of Japanese for Computer-Assisted Second Language Learning
Jun'ichi Kakegawa, Hisayuki Kanda, Eitaro Fujioka, Makoto Itami, Kohji Itoh
Department of Applied Electronics, Science University of Tokyo
2641 Yamazaki, Noda-shi, Chiba-ken 278-8510, JAPAN
{kakegawa,kanda,eitaro76,itami,itoh}@itlb.te.noda.sut.ac.jp

Abstract
As an application of NLP to computer-assisted language learning (CALL), we propose a diagnostic processor of Japanese that can detect errors and inappropriateness in sentences composed by students in the given situation and context of the exercise texts. Using the LTAG (Lexicalized Tree Adjoining Grammar) formalism, we have implemented a prototype of such a diagnostic parser as a component of a CALL system being developed.

1 Introduction
In the recent classroom of second language learning, the communicative approach (H.G. Widdowson, 1977) is promoted, in which it matters for the students to become aware of language use, i.e. the functionality of language usage and its dependence on the situations and contexts of communication. In order to achieve this objective according to the "constructivistic" point of view of learning (T.M. Duffy et al., 1991), the students are encouraged to produce sentences by themselves in various situations and contexts, and guided to recognize by themselves the erroneous or inappropriate functions of their misused expressions. We have already proposed a Computer-Assisted Language Learning (CALL) system (N. Kato et al., 1997) which provides the students with sample texts promoting their reflection on the errors and inappropriateness, detected by a diagnostic parser, of the sentences composed by the students filling in the blanks set up in the given contexts and situations. In this paper we report on prototyping the diagnostic parser implemented using the LTAG formalism as a component of the system. LTAG (Lexicalized Tree Adjoining Grammar) is a lexicalized grammatical formalism (XTAG Research Group, 1995).
For ease of diagnosing the erroneous sentences composed by the students, a lexicalized type of grammar seemed most suitable. Comparing HPSG (Head-driven Phrase Structure Grammar) (C. Pollard et al., 1994) and LTAG, the two well-known (almost-)lexicalized grammars, LTAG looked simpler and especially convenient for the sentence generation necessary in diagnosis. LTAG systematically associates an elementary tree structure with a lexical anchor, and the structure is embedded in the corresponding lexical item. Associated with each of the external nodes of the embedded tree structure are feature structures such as inflection, case information, head symbol, and semantic constraints, as well as a difference list for surface expressions. These features have their origin in the anchored lexical item. The feature information can, moreover, include knowledge of situated language use. The appearance of the features at the external nodes of the lexical items greatly facilitates generation of local phrases, which is indispensable in diagnostic parsing. These are the reasons why we employed LTAG. Preference of unification over all-procedural handling excluded the so-called "dependency grammar" (M. Nagao, 1996).

2 LTAG of Japanese
2.1 The Characteristics of Japanese
Japanese phrases are classified in the first place into two categories: Yougen phrases (YP) and Taigen phrases (TP). A YP or TP has a Yougen or a Taigen, respectively, as its head word. Yougen along with Taigen as categories belong to the category of semantically self-contained (called autonomous) words. The words, e.g. verbs and adjectives, belonging to Yougen have inflections, whereas the words, e.g. nouns, pronouns, and demonstratives, belonging to Taigen have no inflection. A YP or TP consists of a head word and its sibling phrases on its left semantically modifying the head word.
And such a phrase in its turn can semantically modify an autonomous word by way of attaching a connective to its right, forming a phrase, or by inflecting the head word of the modifier. In general, a sentence is constructed by attaching to a phrase a few (or no) functional words expressing the attitude of the locutor towards the propositional part of the phrase (modality) and the intention of the locution affecting the listener (illocutionary-act marking).

2.2 Elementary Tree
Fig. 1 shows the elementary trees of the LTAG we defined for Japanese.

Figure 1: Example of Elementary Trees

Each node is expressed in a predicate formalism. For example, " " is a self-contained (autonomous) word, and its lexical item, comprising an initial tree, is expressed by:

Note that tense, aspect, polite expressions, and "Ren-you (te)" are dealt with as inflections, just as in the classes teaching Japanese as a Second Language. The lexical items are classified into several categories such as auto, link, prio, post, and compo, according to the embedded tree structures.

2.3 Tree Operation
In LTAG, two tree operations are defined (see Fig. 2). A node of a tree is said to be substituted by another tree if the root node of the latter is successfully unified with the node. A tree is said to be adjoined with another tree if it is successfully inserted into the latter by unifying the root node and the foot node (marked ∗) of the former, respectively, with the separated nodes of the latter, all with the same syntactic category.

Figure 2: Examples of Substitution and Adjunction

In Japanese, a Yougen requires as adjoined modifiers Taigen phrases with connectives (e.g. Fig. 2 (1)) corresponding to the mandatory "cases" (e.g. Fig. 2 (2)), and it may also have those corresponding to the optional "cases". The default order of the case phrases may be changed for the purpose of stressing, or of avoiding unintended modification. The change can be dealt with by way of permutation in unification.
Another type of phrase to modify a Yougen is a YP plus one of the connectives denoting cause, reason-why, condition, etc. (e.g. Fig. 3 (4)). A Yougen may be modified by a YP (Yougen Phrase) with its head Yougen inflected in Ren-you form without any connective (e.g. Fig. 3 (3)). A Taigen is mostly modified by a YP with its head Yougen inflected in Rentai form with no connective (e.g. Fig. 3 (2)). For ease and uniformity of processing, especially in the diagnostic parser, the null connectives λ-Ren-you and λ-Rentai are introduced when a YP modifies a Yougen and a Taigen, respectively, by way of inflection (e.g. Fig. 3 (3), (2)). The other type of phrase to modify a Taigen is a TP plus the connective " (no)" denoting a proprietary, kinship, or whole-part relationship (e.g. Fig. 3 (1)).

Figure 3: Examples of Tree Structure

2.4 Dealing with Situation-Dependent Expression
By incorporating into the feature structure an additional item expressing situational constraints, the parser has the capability of diagnosing the usage of situation-dependent Japanese expressions such as giving and receiving benefits, as well as demonstratives. As for demonstratives, e.g. " (kono-hon)", " (sono-hon)", and " (ano-hon)" indicate a book located in the territory of the locutor, the listener, or outside both, respectively. In the case of expressions for giving and receiving benefits, for example as shown in Table 1, the empathy relational constraints are embedded in each of the lexical items for the underlined words, along with the case information for " (ga)" and " (ni)". Though the three indicated expressions have the same propositional function of expressing giving-benefit whose giver is x and givee is y, the "camera" is placed on the side of x, y, y with "angles" towards y, x, x, respectively. It is seen that the camera angle determines the requirement on the empathy relations (S. Kuno, 1989).
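The empathy constraints embedded in the benefactive lexical items lend themselves to a mechanical check. The sketch below is a hypothetical encoding: the romanized expression names and the numeric empathy degrees E(·|z) are illustrative assumptions, not the system's actual feature structures.

```python
# Hypothetical encoding of the benefactive constraints: each expression
# embeds an empathy constraint between giver x and givee y, relative to
# the locutor z.  Empathy degrees E(.|z) are assumed given as numbers.
CONSTRAINTS = {
    "shite-ageru":  lambda ex, ey: ex > ey,   # E(x|z) > E(y|z)
    "shite-kureru": lambda ex, ey: ex < ey,   # E(x|z) < E(y|z)
    "shite-morau":  lambda ex, ey: ey > ex,   # E(y|z) > E(x|z)
}

def empathy_ok(expression, e_giver, e_givee):
    """True iff the situational empathy degrees satisfy the constraint
    embedded in the lexical item for `expression`."""
    return CONSTRAINTS[expression](e_giver, e_givee)
```

For instance, with E(nurse|z) < E(son|z), a "shite-ageru" sentence with the nurse as giver violates its constraint, while the "shite-kureru" variant satisfies it.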
Suppose the situation E(X|Z) < E(Y|Z) is given, where X, Y, Z stand for "the nurse", "the locutor's son", and "the locutor", respectively; then the parser can diagnose the following.

English: "The nurse(:X) reads the book to my son(:Y)." I(:Z) am the locutor.
Japanese: incorrect " " (hobo-san ga watashi no musuko ni hon wo yonde-ageru.)
Japanese: correct " " (watashi no musuko ga hobo-san ni hon wo yonde-morau.)

Table 1: Situational Constraints in the Lexicon (locutor z; x gives benefit to y)

Expression                  Case Information   Empathy constraint
(x ga y ni shite-ageru)     x, y               E(x|z) > E(y|z)
(x ga y ni shite-kureru)    x, y               E(x|z) < E(y|z)
(y ga x ni shite-morau)     y, x               E(y|z) > E(x|z)

2.5 Composite Verbs
The above-mentioned expressions for giving and receiving, e.g. " " (yonde-morau), are examples of "composite verbs" in Japanese. Many composite verbs can be produced with a considerable number of auxiliary verbs preceded by different main verbs. Because of the modification of the sense and the case control due to the auxiliary component, as illustrated in the case information column of Table 1, we are forced to generate the composite tree (see Fig. 4), carrying out modification of the meaning and the case control, before adjoining of modifiers to the composite verb takes place.

Figure 4: Examples of Composite Verbs

2.6 Modality Words and Illocutionary-Act Markers
In Japanese, "modality words" are functional words expressing the attitude of the locutor towards the propositional part of the utterance; "illocutionary-act markers" demand an answer from the listener or express another intention of the locution affecting the listener. Some combinations of certain adverbs and a "modality word" co-occur in the position interposing that part of the proposition in which the locutor has concern. In the example shown in Fig. 5, " " (darou) is a modality word expressing the locutor's supposition, and " " (osoraku) expresses the extent of his confidence in the supposition.
The lexical item for the latter includes the demand for the modality semantics of the locutor's supposition.

English: It will probably rain tomorrow, I'm sure.
Japanese: (ashita wa, osoraku ame ga huru darou yo.)

Figure 5: Modality Word and Illocutionary-Act Marker

2.7 Connective "wa"
In Japanese, TP plus the connective " " (wa) is frequently used. It is said that there are two kinds of usage of the connective " ": one introduces the theme of the sentence, the other discriminatorily presents one of the cases of the head Yougen, as shown, respectively, in the following cases.

usage 1
English: Me, I climbed that mountain.
Japanese: (boku wa ano yama ni nobo-tta.)

usage 2
English: (e.g.) As for me, I'll have a dish of eel.
Japanese: (boku wa unagi da.)

Figure 6: Example of the Usage of "wa"

In distinguishing between usage 1 and usage 2, we focus on the head Yougen of the YP. If it has any unfilled case, and the semantic constraint of the Taigen before the connective " " corresponds to that of one of the unfilled cases, then our processor regards " " as discriminatory. Otherwise, " " is considered as introducing the theme of the sentence.

2.8 Use of a Stack in Parsing
For implementing a parser for Japanese, a stack memory can be conveniently employed. ♯ In processing the sentence from left to right, the candidate modifier phrases are kept in a stack memory until a possible Yougen or Taigen word appears, and are then inspected to see whether they can modify the word. The tree-structured features of the candidate modifier phrases, popped one by one from the stack, are tried to be unified with those of the word, and the features of the phrases for which the tree-adjoining unification succeeds are integrated with the features of the modified word, to make a Saturated Initial Tree (SIT). The rest of the phrases of the stack are left there to be tested on the next Yougen or Taigen word which appears later on.
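The stack procedure marked ♯ can be sketched as follows. This is a hypothetical skeleton: the dictionary-based word records and the try_adjoin unification test stand in for the system's actual feature structures and tree operations.

```python
def parse_with_stack(words, try_adjoin):
    """Skeleton of the ♯ step: candidate modifier phrases are pushed
    until a head (Yougen/Taigen) word appears; each stacked SAT is then
    popped and test-adjoined to the head, and SATs that fail unification
    are kept in the stack for a later head word.

    `try_adjoin(sat, head)` returns the integrated SIT on success,
    or None when unification fails."""
    stack, sit = [], None
    for word in words:
        if not word.get("head"):
            stack.append(word)            # candidate modifier phrase (SAT)
            continue
        sit = word                        # a Yougen or Taigen head word
        leftover = []
        while stack:
            sat = stack.pop()
            merged = try_adjoin(sat, sit)
            if merged is not None:
                sit = merged              # modifier adjoined into the SIT
            else:
                leftover.append(sat)      # left for a later head word
        stack.extend(reversed(leftover))
    return sit, stack
```

A SAT whose case is not licensed by the current head simply stays in the stack, mirroring the "left there to be tested on the next Yougen or Taigen word" behaviour described above.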
Any ordering of modifiers is syntactically permitted except when an undesired modification takes place. ♮ If a connective is found by reading one word ahead, the thus-far-made SIT substitutes the left external node of the tree of the connective to make a Saturated Auxiliary Tree (SAT), provided unification succeeds (e.g. Fig. 7). If the word read ahead is a modality word, its yp node is substituted by the yp root of the SIT, and after the interposing modality modifiers have been processed, the resulting phrase is considered an SIT anew and the procedure goes to ♮. If the word read ahead is an illocutionary-act marker or the sentence-ending symbol, and the inflection of the SIT is appropriate, parsing terminates. Otherwise either the λ-Ren-you or the λ-Rentai connective is attached, depending on the inflection of the head of the SIT, to make a SAT (see Fig. 3 and Fig. 7). In either case, as well as in the case with a non-null connective, the SAT is pushed into the stack and the procedure recurs to ♯.

Figure 7: Example of SAT and SIT

3 Generation
We describe here our algorithm for generating a sentence when a semantic relationship, for example as in Fig. 8, is given. The generation process progresses as illustrated in Fig. 9.

Figure 8: Example of a Semantic Relationship and Trees

The main steps of our generation algorithm follow. At first, from the lexical database, an autonomous word is fetched whose semantic relationship term is unifiable with the root of the given semantic relationship. Letting the root and terminal node of the word be the first and the second arguments, respectively, generate2 is called.
• If the first argument can be unified with the second argument, generation is terminated. Otherwise, the process, carrying over the second argument, searches for a prio or link word whose root node can be unified with the first argument.
• If a prio word is found, letting its right (foot) node be the first argument and retaining the second argument, generate2 is called.
• If a link word is found, an autonomous word is searched for whose root node can be unified with the left (substitution) node of the link word. Letting the word's root and terminal node be the first and the second arguments, respectively, generate2 is called. Letting the right (foot) node of the link word be the first argument and retaining the second argument, generate2 is called.

In the following, searching for the autonomous word and handing its two nodes off to generate2 are dealt with by the generate1 predicate.

generate1(Node) :-
    auto(W, Node, Terminal),
    generate2(Node, Terminal).

generate2(Node1, Node2) :-
    unify(Node1, Node2).

generate2(Root, Terminal) :-
    prio(W, Root, Right),
    generate2(Right, Terminal).

generate2(Root, Terminal) :-
    link(W, Root, Left, Right),
    generate1(Left),
    generate2(Right, Terminal).

In the case of generation including modality words, illocutionary-act markers, or composite verbs, the algorithm needs slightly more complicated procedures.

4 Case and Semantic Processing in Parsing and Generation
In parsing and generation, case and semantic processing occurs by unification, without any procedural programming. The initial tree structure of the lexical item of an autonomous word consists of a root node and a terminal node. Especially in the YP initial tree, the root node has a filled used-case slot and a variable unused-case slot, as well as a variable semantic slot whose head part is filled. The terminal node has a null used-case slot and a filled unused-case slot, as well as a semantic slot consisting only of the head predicate.

Figure 9: Example of Generation

In parsing, following the process as illustrated in Fig. 9 bottom-up, when the foot YP node of a YP SAT (e.g. ) is unified with the terminal node of a Yougen autonomous word (e.g. ), the case data, if any, (e.g. [Y,[ (Y)], ]) corresponding to the SAT is moved from the unused-case slot to the used-case slot in the SAT root node.
The semantic data from the SAT is integrated with that of the word and transferred to the SAT root. The foot YP node of another YP SAT, if any, (e.g. ) is unified with the said root node, and the corresponding case data, if any, (e.g. [Z,[ (Z)], ]) is further moved from the unused-case slot to the used-case slot. The semantic data from the new SAT is joined with that in the previous SAT root in the root of the new SAT. Proceeding likewise, finally, by unifying the concatenated SAT with the root of the original autonomous word (e.g. ), there remain in the unused-case slot those case data with no corresponding SAT, which may be explained by omitted SATs or by the slash case whose entity will be found in the Taigen word to be modified by the thus-constructed modifying YP. The whole semantic data from the SATs is integrated in the root node of the original autonomous word. The process of adjoining TP SATs (e.g. ) to modify a Taigen autonomous word (e.g. ) is similar to that for YP SATs to a Yougen word, except that no case data processing occurs. In generation, following the process as illustrated in Fig. 9 top-down, when the whole given semantic relationship is unified into the semantic slot of a Yougen autonomous word (e.g. ), and if e.g. a link word (e.g. ) is found with its root unifiable with the root of the Yougen initial tree, the semantic expression is divided into two parts thanks to the case data (e.g. [Z,[ (Z)], ]); the one part (e.g. [ ( ,Z, , )]) is transferred to the right node, and the other part (e.g. [ (X,Z,Y),[ (Y),[ (U,Y), ( ,U)]]]) is transferred to the left (foot) node. From the case data of the used-case slot of the original Yougen (e.g. ), the case data corresponding to the link word (e.g. ) is moved from the used-case slot to the unused-case slot in the left (foot) node. That part of the semantics transferred to the right node is processed to find the corresponding surface expression (e.g. ) by constructing an SIT.
The other part of the semantics, sent to the left (foot) node along with the remaining used-case slot (e.g. [[Y,[ (Y)], ]]), is made use of for finding a link word (e.g. ) whose root node is unifiable with the said left (foot) node. The semantics sent to the new link root node is divided into two parts: the one part (e.g. [ (Y),[ (U,Y), ( ,U)]]) is sent to the right node to form an SIT and construct the corresponding surface expression (e.g. ), and the other part (e.g. [ (X,Z,Y)]) is sent to the left (foot) node. Proceeding likewise, when all the used-case data has been transferred into the unused-case slot in the foot node, it may be unified with the terminal node of the original Yougen (e.g. ), terminating the generation.

5 Mechanism of Semantic Diagnostic Processing
5.1 Postulation
In our CALL system, the students are asked to fill in the blanks for composition in the given situation and context, using words from a given list. Therefore no morphological analysis is needed. In diagnosing the students' sentences, we assume that the following data is available for constraining processing.
• Semantic elements and their relationships, which should be expressed by the sentence with which the students are asked to fill the blanks.
• The list of words, to be used in the composition, corresponding to the semantic elements.

Fig. 10 is an example of relationships of semantic elements represented by a tree structure. Modifying elements are placed as the children of their parent, the modified element. The list of the words to be used for expressing an element is linked to the element.

Figure 10: Example of relationships of semantic elements

5.2 Principle of Semantic Diagnosis
After an SIT has been constructed, the diagnostic parser consults the lexicon with the succeeding word. If it is a connective, the parser tries the substitution operation with the SIT and, if successful, appends it to the SIT to form a temporary SAT.
In case the parser fails to append the connective to the SIT, only the surface expression of the connective, along with the SIT, is recorded in the provisional SAT. Suppose the succeeding word is not a connective. If it is a Taigen or Yougen, and the SIT is yp and its inflection is Rentai or Ren-you, respectively, then λ-Rentai or λ-Ren-you is appended to the SIT to form a SAT, even though the inflection might be incorrect. If the inflection of the SIT is inconsistent with the succeeding word, or the SIT is tp, then, as no reasonable interpretation is possible, the "pending connective" µ is appended to the SIT to make a SAT. In all of the above-mentioned cases, the obtained SAT is pushed into the stack. When the parser encounters a Yougen word [♯] or a Taigen word, it pops one SAT after another from the stack and examines, locally generating surface expressions, whether it conforms with one of the semantic children of the parent corresponding to the target Yougen/Taigen word. If it does, the parser adjoins the SAT to the word, after, if necessary, having corrected a wrong/missing connective or a wrong inflection of the SAT, thus making an SIT, including error-correction messages if any. If the popped SAT does not conform with any of the semantic children, it is pushed into a temporary stack, recording the SAT as a false modifier if the SAT can be falsely adjoined to the Yougen/Taigen word. In the case of a SAT accompanying µ, the parser, consulting the semantic relationship tree data and generating a related phrase, replaces µ with a suitable missing connective and/or corrects the wrong inflection if necessary. When a SAT is popped which conforms with one of the semantic children, the SATs held in the temporary stack at that instant, if any, must have been obstacles for the popped SAT to modify the target word, and they are marked "⋆". After all the SATs in the main stack have been examined, the SATs recorded in the temporary stack are returned into the main stack.
And then the SAT constructed as explained above is pushed into the main stack. If, later on, the SATs marked "⋆" are found to modify a target word, conforming to the semantic relationship, they are commented on as causing modification crossover. Finally, if the semantic relationship requires modality expression(s) and/or illocutionary-act marker(s), the thus-far-made Yougen SIT is (recursively if necessary) substituted into the yp node of the expression(s) and, at the same time, the corresponding modifiers of the expression(s) are looked for in the main stack, to be popped, making an SIT. If at [♯] the found Yougen word is part of a composite verb the semantic relationship requires, the rest is looked for and supplemented if lacking, the case information is modified if necessary, and the same procedures follow as described after [♯].

6 Example of Diagnosis
For example, supposing the student had input the sentence shown in Fig. 11, the parser could detect the errors by using the semantic relationship aforementioned in Fig. 10 and the relation of the degrees of empathy in the given situation. The detected errors are listed in the following.

Figure 11: Example of the Result of Diagnosis

false modification: Inappropriate placing of " " (watashi no), causing the phrase to modify " " (hobo-san).
missing connective: Missing connective " " (ga), which " " (hobo-san) must have for the phrase to be adjoined to " " (yo-nde kureru).
obstacle for modification: " " (hobo-san) stands as an obstacle for " " (watashi no) to modify " " (musuko).
wrong inflection: " " (yo-mi) has to be replaced by " " (yo-nde) for the verb to form a composite verb together with the auxiliary verb " " (kureru) expressing giving benefit.
wrong connective: Wrong connective " " (de) has to be replaced by " " (wo), which " " (hon) must have for the phrase to be adjoined to " " (yo-nde kureru).
modification crossover: The sentence has a modification crossover between " " (watashi no musuko) and " " (hobo-san ga yo-nde kureru).
inappropriate situational expression: Use of " " (ageru) in the given situation designates the empathy relation E(nurse|locutor) > E(the locutor's son|locutor), which contradicts the given empathy relation. It requires fewer corrections for " " to be replaced by " " (kureru), conforming with the relation and retaining " " (musuko ni), than to be replaced by " " (morau).

7 Conclusions
We proposed a diagnostic processor of Japanese and described its procedures in detail. The parser makes use of the LTAG formalism, introducing several additional data structures such as SIT, SAT, and null/pending connectives. The diagnosis we reported here is local in principle. Referring to the given relationship of semantic elements, each error is detected and corrected locally. The correction messages are generated and recorded locally in SITs. The undesired modifications in the student's sentence, however, can be detected and commented on. Our CALL system, based on the detected errors and inappropriateness, provides the students with sample texts which will enable the students to correct their sentences by themselves. The tasks to be achieved are: 1. to establish an ontology of semantic relationship description, 2. an efficient methodology for preparing the lexical items comprising semantic constraints, 3. to communicate semantic contexts and situations to the students through assisted reading of the texts by way of bidirectionally linking the text words with an electronic dictionary, 4. to deal with anaphora.

Acknowledgment
The authors are grateful to Prof. Jun-ichi Tsujii, University of Tokyo, for discussing and providing information on LTAG as well as the status quo of natural language processing. The work reported in this paper was partially supported by Grant-in-Aid for Scientific Research 09680303, Ministry of Education.

References
The XTAG Research Group (1995): "A Lexicalized Tree Adjoining Grammar for English", University of Pennsylvania, IRCS Report 95-03, March 1995.
Owen Rambow and Aravind K. Joshi (1994): "A Processing Model for Free Word Order Languages", in Perspectives on Sentence Processing, C. Clifton, Jr., L. Frazier and K. Rayner, editors. Lawrence Erlbaum Associates.
Carl Pollard, Ivan A. Sag (1994): "Head-Driven Phrase Structure Grammar", The University of Chicago Press.
M. Nagao (1996): "Natural Language Processing", Iwanami-Shoten.
V. M. Holland, J. D. Kaplan, M. R. Sams (1995): "Intelligent Language Tutors – Theory Shaping Technology –", LEA, pp.183-200.
T. M. Duffy, J. Lowyck, D. H. Jonassen (1991): "Designing Environment for Constructive Learning", NATO ASI Series Vol. F105, Springer-Verlag.
H. G. Widdowson (1977): "Teaching Language as Communication", Oxford University Press.
Susumu Kuno (1989): "Danwa-no-Bunpou (Grammar of Discourse)", Daisyukan-Syoten.
Nobutaka Kato, Yi Liu, Tomonori Manome, Hisayuki Kanda, Makoto Itami, Kohji Itoh (1997): "Use of Situation-Functional Indices for Diagnosis and Dialogue Database Retrieval in a Learning Environment for Japanese as Second Language", Proceedings of AIED '97, pp.247-254.
Word Sense Disambiguation by Learning from Unlabeled Data
Seong-Bae Park†, Byoung-Tak Zhang†, and Yung Taek Kim‡
Artificial Intelligence Lab (SCAI)
School of Computer Science and Engineering
Seoul National University, Seoul 151-742, Korea
†{sbpark,btzhang}@scai.snu.ac.kr ‡[email protected]

Abstract
Most corpus-based approaches to natural language processing suffer from lack of training data. This is because acquiring a large number of labeled data is expensive. This paper describes a learning method that exploits unlabeled data to tackle the data sparseness problem. The method uses committee learning to predict the labels of unlabeled data that augment the existing training data. Our experiments on word sense disambiguation show that predictive accuracy is significantly improved by using additional unlabeled data.

1 Introduction
The objective of word sense disambiguation (WSD) is to identify the correct sense of a word in context. It is one of the most critical tasks in most natural language applications, including information retrieval, information extraction, and machine translation. The availability of large-scale corpora and various machine learning algorithms enabled corpus-based approaches to WSD (Cho and Kim, 1995; Hwee and Lee, 1996; Wilks and Stevenson, 1998), but a large-scale sense-tagged corpus or aligned bilingual corpus is needed for a corpus-based approach. However, most languages except English do not have a large-scale sense-tagged corpus. Therefore, any corpus-based approach to WSD for such languages should consider the following problems:
• There is no reliable and available sense-tagged corpus.
• Most words are sense-ambiguous.
• Annotating large corpora requires human experts, so it is too expensive.
Because it is expensive to construct a sense-tagged corpus or bilingual corpus, many researchers have tried to reduce the number of examples needed to learn WSD (Atsushi et al., 1998; Pedersen and Bruce, 1997). Atsushi et al. (1998) adopted a selective sampling method to use a small number of examples in training. They defined a training utility function to select examples with minimum certainty, and at each training iteration the examples with less certainty were saved in the example database. However, at each iteration of training, the similarity among word property vectors must be calculated due to their k-NN-like implementation of training utility. While labeled examples obtained from a sense-tagged corpus are expensive and time-consuming to acquire, it is significantly easier to obtain unlabeled examples. Yarowsky (1995) presented, for the first time, the possibility that unlabeled examples can be used for WSD. He used a learning algorithm based on the local context under the assumption that all instances of a word have the same intended meaning within any fixed document, and achieved good results with only a few labeled examples and many unlabeled ones. Nigam et al. (2000) also showed that unlabeled examples can enhance the accuracy of text categorization.

Attribute     Substance
GFUNC         the grammatical function of w
PARENT        the word of the node modified by w
SUBJECT       whether or not PARENT of w has a subject
OBJECT        whether or not PARENT of w has an object
NMOD WORD     the word of the noun modifier of w
ADNWORD       the head word of the adnominal phrase of w
ADNSUBJ       whether or not the adnominal phrase of w has a subject
ADNOBJ        whether or not the adnominal phrase of w has an object

Table 1: The properties used to distinguish the sense of an ambiguous Korean noun w.
In this paper, we present a new approach to word sense disambiguation that is based on a selective sampling algorithm with committees. In this approach, the number of training examples is reduced by determining, through weighted majority voting of multiple classifiers, whether a given training example should be learned or not. The classifiers of the committee are first trained on a small set of labeled examples, and the training set is then augmented by a large number of unlabeled examples. One might think that this opens the possibility that the committee is misled by unlabeled examples. But the experimental results confirm that the accuracy of WSD is increased by using unlabeled examples when the members of the committee are well trained with labeled examples. We also theoretically show that performance improvement is guaranteed under a mild requirement, i.e., the base classifiers need to guess better than random selection. This is because the possibility of being misled by unlabeled examples is reduced by integrating the outputs of multiple classifiers. One advantage of this method is that it effectively performs WSD with only a small number of labeled examples, and thus shows the possibility of building word sense disambiguators for languages which have no sense-tagged corpus.

The rest of this paper is organized as follows. Section 2 introduces the general procedure for word sense disambiguation and the necessity of unlabeled examples. Section 3 explains how the proposed method works using both labeled and unlabeled examples. Section 4 presents the experimental results obtained by using the KAIST raw corpus. Section 5 draws conclusions.

2 Word Sense Disambiguation

Let S = {s_1, ..., s_k} be the set of possible senses of a word to be disambiguated. To determine the sense of the word, we need to consider the contextual properties.
Let x = <x_1, ..., x_n> be the vector representing the selected contextual features. If we have a classifier f(x; θ) parameterized with θ, then the sense of a word with property vector x can be determined by choosing the most probable sense s*:

    s* = argmax_{s ∈ S} f(x; θ).

The parameters θ are determined by training the classifier on a set of labeled examples, L = {(x_1, s_1), ..., (x_N, s_N)}.

2.1 Property Sets

In general, the first step of WSD is to extract a set of contextual features. To select particular properties for Korean, the language of our concern, the following characteristics should be considered:

- Korean is a partially free-order language. The ordering information on the neighbors of the ambiguous word, therefore, does not give significantly meaningful information in Korean.
- In Korean, ellipses appear very often with a nominative case or objective case. Therefore, it is difficult to build a large-scale database of labeled examples with case markers.

Considering both characteristics and the results of previous work, we select eight properties for WSD of Korean nouns (Table 1). Three of them (PARENT, NMOD_WORD, ADNWORD) take a morphological form as their value, one (GFUNC) takes 11 values of grammatical functions [1], and the others take only true or false.

2.2 Unlabeled Data for WSD

Many researchers have tried to develop automated methods to reduce the training cost in language learning, and found that the cost can be reduced by active learning, which has control over the training examples (Dagan and Engelson, 1997; Liere and Tadepalli, 1997; Zhang, 1994). Though the number of labeled examples needed is reduced by active learning, the labels of the selected examples must be given by human experts.
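The decision rule s* = argmax_{s ∈ S} f(x; θ) above can be sketched in a few lines of Python. The scoring function here is a toy stand-in for any trained classifier, and the sense prototypes are invented for illustration (they are not from the paper):

```python
def disambiguate(x, senses, score):
    """Return the most probable sense: argmax over senses of score(x, s)."""
    return max(senses, key=lambda s: score(x, s))

# Hypothetical per-sense prototypes over Table-1-style properties.
PROTOTYPES = {
    "ship":    {"GFUNC": "subject", "PARENT": "sail"},
    "stomach": {"GFUNC": "object",  "PARENT": "ache"},
}

def toy_score(x, sense):
    """Count how many (property, value) pairs of x match the prototype."""
    return sum(1 for k, v in PROTOTYPES[sense].items() if x.get(k) == v)

x = {"GFUNC": "subject", "PARENT": "sail", "SUBJECT": True}
print(disambiguate(x, ["ship", "stomach"], toy_score))  # -> ship
```

Any classifier that maps a property vector to per-sense scores can be plugged in for `toy_score`.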
Thus, active learning is still expensive, and a method for automatically labeling unlabeled examples is needed to have the learner gather information automatically (Blum and Mitchell, 1998; Pedersen and Bruce, 1997; Yarowsky, 1995). As the unlabeled examples can be obtained with ease and without human experts, they make WSD robust. Yarowsky (1995) presented the possibility of automatic labeling of training examples in WSD and achieved good results with only a few labeled examples and many unlabeled examples. On the other hand, Blum and Mitchell (1998) tried to classify Web pages, in which the description of each example can be partitioned into distinct views, such as the words occurring on that page and the words occurring in hyperlinks. By using both views together, they augmented a small set of labeled examples with a lot of unlabeled examples.

The unlabeled examples in WSD can provide information about the joint probability distribution over properties, but they can also mislead the learner. However, the possibility of being misled by the unlabeled examples is reduced by the committee of classifiers, since combining or integrating the outputs of several classifiers in general leads to improved performance. This is why we use active learning with committees to select informative unlabeled examples and label them.

[1] These 11 grammatical functions are from the parser KEMTS (Korean-to-English Machine Translation System) developed at Seoul National University, Korea.

3 Active Learning with Committees for WSD

3.1 Active Learning Using Unlabeled Examples

The algorithm for active learning using unlabeled data is given in Figure 1. It takes two sets of examples as inputs. The set L is the one with labeled examples, and D = {x_1, ..., x_T} is the one with unlabeled examples, where x_i is a property vector.
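The two inputs can be pictured as simple Python structures: L holds (property vector, sense) pairs, and D holds bare property vectors. The feature values and senses below are invented for illustration:

```python
# Labeled set L: pairs (x, s) of a property vector and its sense.
L = [
    ({"GFUNC": "subject", "PARENT": "sail"}, "ship"),
    ({"GFUNC": "object",  "PARENT": "ache"}, "stomach"),
]

# Unlabeled set D: property vectors only; the algorithm predicts their senses.
D = [
    {"GFUNC": "subject", "PARENT": "float"},
]
```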
First of all, the training set L_j^(1) (1 ≤ j ≤ M) of labeled examples is constructed for each base classifier C_j. This is done by random resampling as in Bagging (Breiman, 1996). Then, each base classifier C_j is trained with the set of labeled examples L_j^(1). After the classifiers are trained on labeled examples, the training set is augmented by the unlabeled examples. For each unlabeled example x_t ∈ D, each classifier computes the sense y_j ∈ S which is the label associated with it, where S is the set of possible senses of x_t.

The distribution W over the base classifiers represents the importance weights. As the distribution can change at each iteration, the distribution in iteration t is denoted by W_t. The importance weight of classifier C_j under distribution W_t is denoted by W_t(j). Initially, the base classifiers have equal weights, so that W_t(j) = 1/M. The sense of the unlabeled example x_t is determined by majority voting among the C_j's with weight distribution W. Formally, the sense y_t of x_t is predicted by

    y_t(x_t) = argmax_{y ∈ S} Σ_{j: C_j(x_t) = y} W_t(j).

If most classifiers believe that y_t is the correct

    Given an unlabeled example set D = {x_1, ..., x_T}, a labeled example set L,
    and a word sense set S = {s_1, ..., s_k} for x_i:

    Initialize W_1(j) = 1/M, where M is the number of classifiers in the committee.
    Resample L_j^(1) from L for each classifier C_j, where |L_j^(1)| = |L|, as done in Bagging.
    Train base classifier C_j (1 ≤ j ≤ M) from L_j^(1).
    For t = 1, ..., T:
      1. Each C_j predicts the sense y_j ∈ S for x_t ∈ D; Y = <y_1, ..., y_M>.
      2. Find the most likely sense y_t from Y using distribution W:
           y_t = argmax_{y ∈ S} Σ_{j: C_j(x_t) = y} W_t(j).
      3. Set β_t = (1 − ε_t) / ε_t, where
           ε_t = (No. of C_j's whose predictions are not y_t) / M.
      4.
      If β_t is larger than a certainty threshold τ, then update W_t:
           W_{t+1}(j) = (W_t(j) / Z_t) × { β_t if y_j = y_t; 1 otherwise },
         where Z_t is a normalization constant.
      5. Otherwise, every classifier C_j is restructured from the new training set L_j^(t+1):
           L_j^(t+1) = L_j^(t) + {(x_t, y_t)}.
    Output the final classifier:
         f(x) = argmax_{y ∈ S} Σ_{j: C_j(x) = y} W_T(j).

    Figure 1: The active learning algorithm with committees using unlabeled examples for WSD.

sense of x_t, they need not learn x_t, because this example makes no contribution to reducing the variance over the distribution of examples. In this case, instead of learning the example, the weight of each classifier is updated in such a way that the classifiers whose predictions were correct get a higher importance weight and the classifiers whose predictions were wrong get a lower importance weight, under the assumption that the correct sense of x_t is y_t. This is done by multiplying the weight of each classifier whose prediction is y_t by the certainty β_t. To ensure that the updated W_{t+1} forms a distribution, W_{t+1} is normalized by the constant Z_t. Formally, the importance weight is updated as follows:

    W_{t+1}(j) = (W_t(j) / Z_t) × { β_t if y_j = y_t; 1 otherwise }.

The certainty β_t is computed from the error ε_t. Because we trust that the correct sense of x_t is y_t, the error ε_t is the ratio of the number of classifiers whose predictions are not y_t. That is, β_t is computed as

    β_t = (1 − ε_t) / ε_t,

where ε_t is given as

    ε_t = (No. of C_j's whose predictions are not y_t) / M.

Note that the smaller ε_t is, the larger the value of β_t. This implies that, if the sense of x_t is certainly y_t and a classifier predicts it, a higher weight is assigned to that classifier. We assume that most classifiers believe that y_t is the sense of x_t if the value of β_t is larger than a certainty threshold τ, which is set by trial and error.
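One iteration of the Figure-1 loop can be sketched as follows. The `.predict(x)` and `retrain(c, x, y)` interfaces are assumptions standing in for the paper's restructurable C4.5 trees; the symbols (β, ε, τ, Z) follow the formulas above:

```python
from collections import defaultdict

def weighted_vote(preds, W):
    """y_t = argmax_y of the summed weights W(j) over classifiers j voting y."""
    score = defaultdict(float)
    for y, w in zip(preds, W):
        score[y] += w
    return max(score, key=score.get)

def committee_step(classifiers, W, x_t, tau, retrain):
    """One iteration over an unlabeled example x_t; returns the new weights."""
    M = len(classifiers)
    preds = [c.predict(x_t) for c in classifiers]
    y_t = weighted_vote(preds, W)
    eps = sum(1 for y in preds if y != y_t) / M   # error eps_t
    if eps == 0:
        return W                                  # unanimous: reweighting is a no-op
    beta = (1 - eps) / eps                        # certainty beta_t
    if beta > tau:                                # confident committee: reweight only
        W = [w * beta if y == y_t else w for w, y in zip(W, preds)]
        z = sum(W)                                # Z_t renormalizes W_{t+1}
        return [w / z for w in W]
    for c in classifiers:                         # uncertain: learn (x_t, y_t)
        retrain(c, x_t, y_t)
    return W
```

For example, with three equally weighted classifiers predicting ("a", "a", "b"), ε = 1/3 and β = 2, so the two agreeing classifiers end up with weight 0.4 each and the dissenter with 0.2.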
However, if the certainty is below the threshold, the classifiers still need to learn the example x_t, with the belief that its sense is y_t. Therefore, the set of training examples L_j^(t) for the classifier C_j is expanded by

    L_j^(t+1) = L_j^(t) + {(x_t, y_t)}.

Then, each classifier C_j is restructured with L_j^(t+1). This process is repeated until the unlabeled examples are exhausted. The sense of a new example x is then determined by weighted majority voting among the trained classifiers:

    f(x) = argmax_{y ∈ S} Σ_{j: C_j(x) = y} W_T(j),

where W_T(j) is the importance weight of classifier C_j after the learning process.

3.2 Theoretical Analysis

Previous studies show that using multiple classifiers rather than a single classifier leads to improved generalization (Breiman, 1996; Freund et al., 1992), and that learning algorithms which use weak classifiers can be boosted into strong algorithms (Freund and Schapire, 1996). In addition, Littlestone and Warmuth (1994) showed that the error of the weighted majority algorithm is linearly bounded on that of the best member when the weight of each classifier is determined by held-out examples.

The performance of the proposed method depends on that of the initial base classifiers. This is because it is highly possible for unlabeled examples to mislead the learning algorithm if the classifiers are poorly trained in their initial state. However, if the accuracy of the initial majority voting is larger than 1/2, the proposed method performs well, as the following theorem shows.

Theorem 1 Assume that every unlabeled data x_t is added to the set of training examples for all classifiers and the importance weights are not updated.
Suppose that p_0 is the probability that the initial classifiers do not make errors, and δ_t (0 ≤ δ_t ≤ 1) is the probability by which the accuracy is increased in adding one more correct example, or decreased in adding one more incorrect example, at iteration t. If p_0 ≥ 1/2, the accuracy does not decrease as a new unlabeled data is added to the training data set.

Proof. The probability for the classifiers to predict the correct sense at iteration t = 1, p_1, is

    p_1 = p_0(p_0 + δ_0) + (1 − p_0)(p_0 − δ_0)
        = p_0(2δ_0 + 1) − δ_0,

because the accuracy can be increased or decreased by δ_0 with probability p_0 and 1 − p_0, respectively. Therefore, without loss of generality, at iteration t = i + 1 we have

    p_{i+1} = p_i(2δ_i + 1) − δ_i.

To ensure that the accuracy does not decrease, the condition p_{i+1} ≥ p_i should be satisfied:

    p_{i+1} − p_i = p_i(2δ_i + 1) − δ_i − p_i = p_i(2δ_i) − δ_i ≥ 0  ⇒  p_i ≥ 1/2.

The theorem follows immediately from this result. □

3.3 Decision Trees as Base Classifiers

Although any kind of learning algorithm which meets the conditions of Theorem 1 can be used for the base classifiers, Quinlan's C4.5 release 8 (Quinlan, 1993) is used in this paper. The main reason why decision trees are used as base classifiers is that there is a fast restructuring algorithm for decision trees. Adding an unlabeled example with a predicted label to the existing set of training examples makes the classifiers restructured. Because the restructuring of classifiers is time-consuming, the proposed method is of little practical use without an efficient way to restructure. Utgoff et al. (1997) presented two kinds of efficient algorithms for restructuring decision trees and showed experimentally that their methods perform well with only a small restructuring cost. We modified C4.5 so that word matching is accomplished not by comparing morphological forms but by calculating the similarity between words, to tackle the data-sparseness problem.
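The recurrence p_{i+1} = p_i(2δ_i + 1) − δ_i from the proof is easy to check numerically; it is non-decreasing exactly when p_i ≥ 1/2. The δ values below are arbitrary illustrative choices:

```python
def trajectory(p0, deltas):
    """Iterate p_{i+1} = p_i * (2*delta_i + 1) - delta_i from p_0."""
    ps = [p0]
    for d in deltas:
        ps.append(ps[-1] * (2 * d + 1) - d)
    return ps

# Starting above 1/2 the accuracy never decreases...
ps = trajectory(0.6, [0.05] * 10)
# ...while starting below 1/2 one step already loses accuracy:
p_bad = trajectory(0.4, [0.05])[1]
```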
The similarity between two Korean words is measured by the averaged distance in WordNet of their English-translated words (Kim and Kim, 1996).

Word    No. of Senses  No. of Examples  Sense        Percentage
bae     4              876              pear         6.2%
                                        ship         55.2%
                                        times        13.7%
                                        stomach      24.9%
bun     3              796              person       46.2%
                                        minute       50.8%
                                        indignation  3.0%
jonja   2              350              the former   28.6%
                                        electron     71.4%
dari    2              498              bridge       30.9%
                                        leg          69.1%

Table 2: Various senses of the Korean nouns used for the experiments and their distributions in the corpus.

4 Experiments

4.1 Data Set

We used the KAIST Korean raw corpus [2] for the experiments. The entire corpus consists of 10 million words, but in this paper we used the part containing one million words, excluding the duplicated news articles. Table 2 shows the various senses of the ambiguous Korean nouns considered and their sense distributions. The percentage column in the table denotes the ratio with which the word is used in that sense in the corpus. Therefore, we can regard the maximum percentage as a lower bound on the correct sense for each word.

4.2 Experimental Results

For the experiments, 15 base classifiers are used. If there is a tie in predicting senses, the sense with the lowest order is chosen, as in (Breiman, 1996). For each noun, 90% of the examples are used for training and the remaining 10% for testing. Table 3 shows the 10-fold cross-validation results of the WSD experiments for the nouns listed in Table 2. The accuracy of the proposed method shown in Table 3 is measured when the accuracy is at its best over various ratios of the number of labeled examples for the base classifiers to the total number of examples. The results show that WSD by selective sampling with committees, using both labeled and unlabeled examples, is comparable to a single learner using all the labeled examples.

[2] This corpus is distributed by the Korea Terminology Research Center for Language and Knowledge Engineering.
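The "lower bound" used throughout Section 4 is simply the most-frequent-sense baseline, read off the sense distributions of Table 2:

```python
# Sense distributions (in %) from Table 2.
DIST = {
    "bae":   {"pear": 6.2, "ship": 55.2, "times": 13.7, "stomach": 24.9},
    "bun":   {"person": 46.2, "minute": 50.8, "indignation": 3.0},
    "jonja": {"the former": 28.6, "electron": 71.4},
    "dari":  {"bridge": 30.9, "leg": 69.1},
}

# Always guessing the most frequent sense yields this accuracy per noun.
lower_bound = {w: max(d.values()) for w, d in DIST.items()}
print(lower_bound["bae"])  # -> 55.2
```

These are exactly the Lower Bound figures that reappear in Table 3.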
In addition, the method proposed in this paper achieves a 26.3% improvement over the lower bound for 'bae', 41.5% for 'bun', 22.1% for 'jonja', and 4.2% for 'dari', which is a 23.6% improvement on average. Especially, for 'jonja' the proposed method shows higher accuracy than a single C4.5 trained on the whole set of labeled examples.

Figure 2 shows the performance improvement obtained by using unlabeled examples. The figure demonstrates that the proposed method outperforms the one without unlabeled examples. The initial learning in the figure means that the committee is trained on labeled examples but is not augmented by unlabeled examples. The difference between the two lines is the improvement in accuracy obtained by using unlabeled examples. When the accuracy of the proposed method first stabilizes, the improvement in accuracy from using unlabeled examples is 20.2% for 'bae', 9.9% for 'bun', 13.5% for 'jonja', and 13.4% for 'dari'.

It should be mentioned that the results also show that the accuracy of the proposed method may drop when the classifiers are trained on too small a set of labeled data, as is the case in the early stages of Figure 2. However, in typical situations where the classifiers are trained on a minimum training-set

Word     Using Partially Labeled Data  Using All Labeled Data  Lower Bound
bae      81.5 ± 7.7%                   82.3 ± 5.9%             55.2%
bun      92.3 ± 7.7%                   94.3 ± 5.7%             50.8%
jonja    93.5 ± 6.5%                   90.6 ± 9.4%             71.4%
dari     73.3 ± 14.2%                  80.8 ± 10.9%            69.1%
Average  85.2%                         87.0%                   61.6%

Table 3: The accuracy of WSD for Korean nouns by the proposed method.

size, this does not happen, as the results of the other nouns show. In addition, we can find in this particular experiment that the accuracy is always improved by using unlabeled examples if only about 22% of the training examples, on average, are labeled in advance. In Figure 2(a), it is interesting to observe jumps in the accuracy curve.
The jump appears because the unlabeled examples mislead the classifiers only when the classifiers are poorly trained, but they play an important role as information for selecting senses when the classifiers are well trained on labeled examples. The other nouns show similar phenomena, though the percentage of labeled examples at which the accuracy flattens out is different.

5 Conclusions

In this paper, we proposed a new method for word sense disambiguation that is based on unlabeled data. Using unlabeled data is especially important in corpus-based natural language processing, because raw corpora are ubiquitous while labeled data are expensive to obtain. In a series of experiments on word sense disambiguation of Korean nouns, we observed that the accuracy is improved by up to 20.2% using only 32% of labeled data. This implies that a learning model trained on a small number of labeled data can be enhanced by using additional unlabeled data. We also theoretically showed that the predictive accuracy is always improved if the individual classifiers do better than random selection after being trained on labeled data.

As the labels of unlabeled data are estimated by committees of multiple decision trees, the burden of manual labeling is minimized by using unlabeled data. Thus, the proposed method seems especially effective and useful for languages for which a large-scale sense-tagged corpus is not available yet. Another advantage of the proposed method is that it can be applied to other kinds of language learning problems such as POS tagging, PP attachment, and text classification. These problems are similar to word sense disambiguation in the sense that unlabeled raw data are abundant but labeled data are limited and expensive to obtain.
Acknowledgements

This research was supported in part by the Korean Ministry of Education under the BK21 Program and by the Korean Ministry of Information and Communication through IITA under grant 98-199.

References

F. Atsushi, I. Kentaro, T. Takenobu, and T. Hozumi. 1998. Selective sampling of effective example sentence sets for word sense disambiguation. Computational Linguistics, 24(4):573-597.

A. Blum and T. Mitchell. 1998. Combining labeled and unlabeled data with co-training. In Proceedings of COLT-98, pages 92-100.

L. Breiman. 1996. Bagging predictors. Machine Learning, 24:123-140.

J.-M. Cho and G.-C. Kim. 1995. Korean verb sense disambiguation using distributional information from corpora. In Proceedings of the Natural Language Processing Pacific Rim Symposium, pages 691-696.

I. Dagan and S. Engelson. 1997. Committee-based sampling for training probabilistic classifiers. In Proceedings of the Fourteenth International Conference on Machine Learning, pages 150-157.

[Figure 2: Improvement in accuracy by using unlabeled examples. Panels (a) bae, (b) bun, (c) jonja, and (d) dari plot accuracy (%) against the ratio of the number of labeled examples to the number of total examples (%), comparing initial learning with the proposed method.]

Y. Freund and R. Schapire. 1996. Experiments with a new boosting algorithm.
In Proceedings of the Thirteenth International Conference on Machine Learning, pages 148-156.

Y. Freund, H. Seung, E. Shamir, and N. Tishby. 1992. Selective sampling with query by committee algorithm. In Proceedings of NIPS-92, pages 483-490.

T. Hwee and H. Lee. 1996. Integrating multiple knowledge sources to disambiguate word sense: An exemplar-based approach. In Proceedings of the 34th Annual Meeting of the ACL, pages 40-47.

Nari Kim and Y.-T. Kim. 1996. Ambiguity resolution of Korean sentence analysis and Korean-English transfer based on Korean verb patterns. Journal of KISS, 23(7):766-775. In Korean.

R. Liere and P. Tadepalli. 1997. Active learning with committees for text categorization. In Proceedings of AAAI-97, pages 591-596.

N. Littlestone and M. Warmuth. 1994. The weighted majority algorithm. Information and Computation, 108(2):212-261.

K. Nigam, A. McCallum, S. Thrun, and T. Mitchell. 2000. Learning to classify text from labeled and unlabeled documents. Machine Learning, 39:1-32.

T. Pedersen and R. Bruce. 1997. Distinguishing word senses in untagged text. In Proceedings of the Second Conference on Empirical Methods in Natural Language Processing, pages 399-401.

R. Quinlan. 1993. C4.5: Programs For Machine Learning. Morgan Kaufmann Publishers.

P. Utgoff, N. Berkman, and J. Clouse. 1997. Decision tree induction based on efficient tree restructuring. Machine Learning, 29:5-44.

Y. Wilks and M. Stevenson. 1998. Word sense disambiguation using optimised combinations of knowledge sources. In Proceedings of COLING-ACL '98, pages 1398-1402.

D. Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the 33rd Annual Meeting of the ACL, pages 189-196.

B.-T. Zhang. 1994. Accelerated learning by active example selection. International Journal of Neural Systems, 5(1):67-75.
1qjÓ+0j,ycԈ=?8HI Պ]Î[./mCh_H1eƒŒvg)G8&(*),+.- /`1s‘6Ö<j³×8 Ch_H1baE1„C21</E>\aFyž=¿C2),A,1„Ø€[.]CE_B1D+6)Gd61<8”)G8HaC<=?8 k 1Y),a9= @=?Ch_†)G8wCh_H1\aE1<8C`1<8 k 1m+./F=?@0_sCh_=?CYa2=C2),aÇH1baYCE_H1 ]Î[6A,A,[ln9)t8H+ k [.8HI )GC`),[.8BaF5 36jSØp),a91<>Yœ‚1bI0I01bI‰n9)GCh_H)G8:Ch_H1D/F=?8H+61Y[.]*8H[I01ba Ð^ÙÄÉDÊlËÌSÍlËsC`[iÑDږɿÊl˞ÌSÍlËÛHCh_=?C„)ŽaFyCh_H1 C2),A,1š),aJ1<>Yœ‚1bI0I01bI–n9)GCh_H)G8"CE_B1¤@=?ChC212/h8")G80| aC<=?8 k 1Y=?8HIo)GC2a k [.8C`1<;Cj X jØo)G8 k At-HI01ba†=?C‰A,1=.aCs[.8H1˜[.]4Ch_H1w)t8HarCF=?8 k 1 œ‚[.-08BIH=?/h‡D8H[I01baÐ2Ü¸Ñ y6np_H) k _ k [./h/`1ba@z[.8BIC2[ Ch_H1cœz[.- 8HIH=?/h‡mœ0/F= kEÝ 1<C2a Ò 1qjÓ+0j,y  y   Ö<j ‘ j798†1bI0+61J[.]xØ~Ch_=?C k [./h/`1ba@z[.8BI0aYC`[š=m@=?CE| C21</E8Þ)G8HaC<=?8 k 1 Ò /F=?CE_B1</›Ch_=?8ßC2[à=vxÁDZ0Ö >Y-BarCœ‚1€]¡-BA,AG‡™1<>¿œ‚1bI0I 1bI4n9)GCh_H)G8JCh_H1€/F=?8H+61 [.]Ch_H1c)G8HaC<=?8 k 1c)GC2aE1bAG] Ò 8H[I01baÐ^CE_ /`[.-H+._™ÑÖ2j Ÿz ¡á ⳪ ㆱH§lä ¦¥?§’¨ž©.¥.¨§?ª 7cao),A,AG-HaCE/F=?C21FI )G8Z 1 k C`),[.8àX j,36ypCE_B1†=qAŽ+6[./`)GCh_0> -BaE1ba~Ch_H1YCE/F=.)G8H)t8H+ k [./h@0-Ba„aC<=?C2),arC2) k aFyå‚Ê.æ çÊlèHËØ =?8BIÄØÊqØé ê çÊlèHËØ¿]Î[./m1<d61</h‡–C`),AŽ1e)GCo1<8 k [.- 8 C21</`abj …€_B1~>J1<>\[./E‡m1<8 k [ I01ba€Ch_H1bah1„arCF=?C2),arC2) k ap)G8š=DCE/2),1 IB=?C<=9arCh/Ek CE-0/21.5*1= k _™@=?Ch_¿]f/2[.>Ch_H1S/`[[.C*[.]0Ch_H1 Ch/`)Ž1\C2[†= k 1</ECF=.)G8–8H[I01 k [./h/21Fa@‚[.8HI0aJC2[s=:C2),A,1.y aC`[./2)G8H+DCh_H1pC`)ŽA,19arCF=?C`),aC`) k a€)G8JCh_H1 k [./h/21Fa@‚[.8HI0)t8H+ 8B[ I01.j\…€_H1m=?/ k A°=?œ‚1bA,ai=.A,[.8B+\Ch_H14@z=?CE_˜=?/21Ch_H1 vxÁDZBy)G8HaC<=?8 k 1JCÀ‡@‚1ba4=?8BIwA¡=?œz1bA,1bI†œ0/F= kEÝ 1<C`a™[.] Ch_H1 k [./E/21bar@‚[.8HI )G8H+D@=?ChC`1</h8žj…p),AŽ1Fax[.] 
k =?8BI0),IH=?C21 )G8BarCF=?8 k 1ba k =?8†œ‚14/21<CE/2),1<d61bI›]f/`[.>CE_B14>J1<>\[./E‡ I-0/`)t8H+~@=?/2ah)G8H+9œ ‡Y]Î[6A,A,[n9)G8H+~Ch_H1 k [./h/21Fa@‚[.8HI0)t8H+ @z=?CE_i)G8YCh_H1Ch/`),1.jυ€_H1€CE/2),1S),a k /`1=?C21FIiI - /`)G8B+~Ch_H1 Ch/<=.)G8B)G8H+s@0_=.ah1eœ‡ k [.8HaCE/hk C2)G8H+wCh_H1”aE1<8C`1<8 k 1 +./F=?@0_i]‹[./Ï1l= k _Jah1<8 C21<8 k 1c=?8HI™+’1<8B1</<=?C2)G8H+D=.AŽA Ch_H1 C2),A,1bap]‹[./91= k _e@z=ChC212/h8”)G8HaC<=?8 k 1.j NP VP 1 0 2 3 4 5 6 7 8 9 10 11 .START NN CC NN RB VBZ DT JJ NN . .END NP ë*ì,í.î ï`ðYñ òóô2õ?ö4÷Bø,ðYôEð<ùú`ð<ùBûbðí.ïFõ?÷0üoý9ìGúhü:ûbþ.ùú`ð<ÿú„ôözþ6øŽôHúhüHð„úFõ?ï`í6ð<úp÷zõ?úEú2ð<ïEùHôcõ?ï2ð "õ?ù    "!$#&%'(*),+ -€üHðY÷õ?ïhú`ì¡õ.øz÷õ?ï2ôEìGùBíoõ.ø,í6þ.ï`ìtúEü0ö ï`ðbûbðbì/.6ðbôiõ?ù”ìGù10 ÷0î úôhð<ù ú2ð<ùHûbðï`ð<÷0ï2ðbôEð<ùú2ð &2™õ3546¿ôhð876îHð<ùHûbð õ?ù mø°õ9‚ðbø,ôxþ;:úEüBð úFõ?ï`í6ð<úx÷õ?úEú2ð<ïEùBô<>=ú@? ï`ôúpûbþ.ù10 ôúEïhîHû<ú`ôwõÄôhð<ù ú2ð<ùHûbð–í.ïFõ?÷0ü³ûbþ.ùHôhì,ôrú2ìGùHí þ:Jþ.ùHøA@4B6›ð8 0í6ðbôoõ?ù ›úhüHð<ù–÷‚ð<ïC:Îþ.ïEö\ô4úhüHðD:‹þ6ø,ø,þý9ìGùHí úÀý€þ\ôrú2ð<÷Hôbò E <GFH '(), HJI ),KLMK I 9+N()PO I ) H +2OJQ -€üHð¿õ.ø/0 í6þ.ï2ìGúEü ö ûbþ.ùHôhìR 0ð<ï2ô›ðõ.ûEügôî1BôEð876îHð<ùHûbð þ: úhüHðôhð<ùú`ð<ùHûbð õ.ô^õSUT9VXW9YZWJT[]\^:Îþ.ï^ðõ.ûEüiþ:úEüHð úFõ?ï`í6ð<úS÷õ?úEú2ð<ïEù\ú_ ÷‚ðbôpõ?ù oõ.ôEôhì,í.ùHôxõDûlõ?ù 0ì/0 Hõ?ú2ðeôhûbþ.ï`ð<a`põ?ù 0ìR Bõ?ú`ðbôJý9ìGúhüõ”÷‚þ6ôhìGú`ì/. 
ð ôhûbþ.ï`ðYõ?ï2ðYõ 1 0ð8 oú`þiúEüBðDôEð<ùú2ð2ùBûbðí.ïFõ?÷0ü,< `põ?ù ìR Hõ?ú2ðbô9õ?ï`ðcûbþ.ùHôEì 0ð<ï`ð8 oìGùoþ.ïb 0ð<ïpþ:ìGù10 û<ï2ðlõ.ôhìGùHíoø,ð<ùBíqúhü,<B-€üHì,ô„ý õJ‚ýpüHð<ù†ôhûFþ.ï2ìGùHí õ‰ûbð<ïEúFõ.ìGù"ûõ?ù 0ì Hõ?ú`ð ôrüHþ.ïhú2ð2ï\÷õ?úEú2ð<ïEùìGù10 ôú<õ?ùBûFðbôcúEüõ?ú9ö\ì,í.üú3‚ðYûbþ.ùHôEì 0ð<ï`ð8 sõ.ôcð<öc0 ‚ð8 1 0ð8 :÷zõ?úEú2ð<ïEùBôDõ?ï`ðiõ.øGï2ðõ doï`ð<÷ ï`ðbôhð<ù ú2ð8 ìGù”úEüHð¿ôEð<ùú`ð<ùHûbð™í.ï<õ?÷ ü,õ?ù :ûbþ.ù úhï2ì/0î0ú2ðDú`þ ú2ì,ø,ðbôcõ?ù eôhûbþ.ï`ìtùHí1< e <>f +2OghMiP(), HJI )PKLMK I 9+ H h)jk H 9OJQ óG:0 ú2ð<ïDúEüBðl?0ï2ôrúôrú2ð<÷,*ôhþ.öJð™þ;:SúhüHð4ûõ?ù ìR Hõ?ú2ð ìGùBôrúFõ?ùHûbðbôeömõ8m‚ð†ûbþ.ù1nHì,û<ú2ìGùHíˆý9ìGúhü³ðõ.ûEü þ.úhüHð<ï3oÎû<ï`þ6ôhôEìtùHíið í6ðbô€ìGùJúhüHðcí.ï<õ?÷0üpb<q-€üHì,ô ôú`ð<÷™ï`ðbôhþ6ø/.6ðbôôîHûEü\ûbþ.ù1nHìŽû2ú2ôÏîBôEìGùBí~úhüHðpôhìGöc0 ÷HøŽðeûbþ.ùHôúEïFõ.ìGùúm÷0ï2þ.÷õ.íõ?ú`ìŽþ.ùˆôhûEüHð<öJð”îHôEð8 2$úhüHðrnõ?útsvuw6yx“öJð<úhüHþd ‚ò"ûõ?ù 0ìR Bõ?ú`ðbô õ?ï2ðpõ?÷0÷0ï2þz. ð8 4ìGùiþ.ïb 0ð<ïþ: 0ðbû<ï2ðlõ.ôhìGùHí¿ôEûbþ.ï2ð; ýpüHìŽø,ð€ðbø,ìGö\ìGùõ?ú2ìGùHíŒõ.ø,ø ûõ?ù 0ìR Hõ?ú2ðbô^úEüzõ?ú^ûbþ.ù10 nHìŽû2ú9ý9ìGúhüš÷0ï`ð{.ì,þ.îHôhø/Šõ?÷0÷0ï2þz.6ð8 ”þ.ùHðbô< -€üHðw:Îþ6ø,ø,þlý9ìGùBíYôrî1BôEðbû<ú2ì,þ.ùt 0ðbôEû<ï2ì/‚ðbô€úEüHð9ûbþ.ï2ð ÷õ?ïhú9þ:*úEüBðDõ.ø,í6þ.ï2ìGúEü öHôhûbþ.ï`ìtùHímõiûlõ?ù 0ìR Hõ?ú2ð< M| }Bh!~q%*),€} I )PKLMK I 9+ FXH 9+  ì/.6ð<ù†õiü‚÷‚þ.úhüHðbôEìRƒbð8 ‰ûõ?ù ìR Hõ?ú2ð:‹þ.ïŒõ4ûbð<ïhú<õ.ìGù ÷õ?úhú`ð<ïhùú_÷zðiô÷õ?ù0ùBìGùHí„‚ð2úÀýxðbð<ùà÷‚þ6ôhìGú`ìŽþ.ùHô… õ?ù ‡†ˆ^úhüHðoõ.ø,í6þ.ï2ìGúhü0ö ôhðõ?ï`ûhüHðbôl:Îþ.ï\õ.ø,øÏú2ì,ø,ðbôiþ: úhüHðsûõ?ù 0ìR Bõ?ú`ð†ìGù&úhüHð‰öJð<ö\þ.ïC úEï2ì,ð<‰-€üHì,ô”ì,ô þ.ùHð 2˜úEïFõ.6ð<ï2ôEìGùBí›úEüBðeôhð<ù ú2ð<ùHûbð‰í.ïFõ?÷0üŠ:fï`þ.ö ðõ.ûEüŠ÷zþ6ôhôEìAHø,ð4ôúFõ?ïEú2ìGùHím÷‚þ6ôEìGú2ì,þ.ù‹:Îþ.ï~úhüHðiûõ?ù 0ìA0 Bõ?ú`ð€ú2ì,ø,ðbô6îBôEìGùBíiõ Œ„ëk620Àø,ì/6ðcôEðõ?ï2ûEüP<LŽüBð2ùBð{.6ð<ï õ?ùið8 0í6ðxì,ôžúEïFõ. ð2ï2ôhð \ìGùYúEüBð€ôEð<ùú2ð2ùBûbð í.ïFõ?÷0üP?úhüHð ûbþ.ïhï`ðbô÷‚þqù 0ìGùBí‰ð í6ðì/:cõ. 
õ.ì,ø°õ9Hø,ðì,ôDúhï<õ8.6ð<ï2ôEð8 õ.ø,ôhþ‰ìGùˆúhüHðJúhï`ì,ðmú2þ‹:‹ð<ú2ûEüˆúhüHðoôrúFõ?ú2ì,ôrú2ì,ûbô4þ:9úhüHð ûbþ.ïhï`ðbô÷‚þqù 0ìGùBí4ú`ì,øŽð;< -€üBðpõ.ø,í6þ.ï2ìGúEü öûbþ.ö4÷ î0ú`ðbô^úhüHðk:‹þ6ø,ø,þý9ìGùHícôhûFþ.ï2ð :fî0ùBû2ú2ì,þ.ùD:Îþ.ï~ðõ.ûhüeú2ì,ø,ð3:Îþ.î0ù šìGùoúEüBð„úEï2ì,ð4ò z o*‘pq’ ÷‚þ6ô ûbþ.î0ùú;o(‘p ÷‚þ6ô ûbþ.î0ùúo*‘p,“wùHðbí ûbþ.î0ùú;o(‘p ” -pì,øŽðFô9ô2õ?ú`ìŽô•:ìGùHí g o*‘p>–˜— k™ ýpüBð2ï2ðw—  ì,ôxõ~÷0ï2ð{0Àôr÷‚ðbûbì/?Hð8 Júhü0ï2ðbôrüHþ6ø õ?ï`ð9ûbþ.ù10 ôhìR 0ð<ï2ð8 ‰õ.ô~ôî0÷ ÷zþ.ïhú2ìGùHímð{.ìR 0ð<ùHûbð:‹þ.ïcúhüHðDûõ?ù 0ìA0 Bõ?ú`ðDõ?ù eûbþ.ùúEï2ì/0î ú`ðDú2þ4ìGú2ô9ôEûbþ.ï2ð< 4cùHûbðYõ.ø,øžôî0÷ ÷zþ.ïhú`ìtùHí4ú2ì,ø,ðbôcõ?ï`ð3:Îþ.î0ù 0úhüHðYõ.ø/0 í6þ.ï2ìGúEü ö“úhï`ì,ðbôú`þDîBôEð9úEüBð2öš:‹þ.ïSûbþz.6ð<ï2ìGùHíiúEüBð ð<ùˆ0 ú2ìGï2ð ûõ?ù ìR Hõ?ú2ð<›-pìŽø,ð~ìGùˆ:‹þ.ïhömõ?ú2ì,þ.ùmì,ôSôú`þ.ï2ð mìtùšõ Sbœ9\{žGŸgžbTU ¡J õ.ô*ìGù4úhüHð@nzõ?ú". ð<ï`ôhì,þ.ù,ú2þ :¸õ.ûFìŽø,ìGú<õ?ú2ð ð{¢\ûFìŽð2ùú¿ôEûbþ.ï2ìGùHí1<pó‰SUœ9'\{ž£:Îþ.ï„õ™ûlõ?ù 0ìR Hõ?ú2ðYì,ôŒõ ôhð<ú~þ:ú`ì,ø,ðbô9ôîHûEü”úEüzõ?úlò ¤ ðõ.ûhüwýxþ.ïU †þ:úhüHðJûõ?ù 0ìR Hõ?ú2ðJì,ôDûbþ.ùú<õ.ìtùHð8 ìtù:õ?ú9ø,ðõ.ôúcþ.ùHð„ú2ì,ø,ð ¤ ùBþú2ì,ø,ðDûbþ.ùú<õ.ìGùBô~õ?ùBþ.úEüBð2ï9ú2ì,ø,ðDð<ùú`ìtï`ðbø/J< 4 :fú2ð<ù,6õ€ùî0ö‚ð<ïSþ;: ûbþz. ð2ï2ôûõ?ùB‚ðxû2ï2ðõ?ú`ð8 :fï`þ.ö õ‰ôhð<úJþ:cú`ì,ø,ðbô8pðõ.ûEü–ö¤õÄûFþ.ùúFõ.ìGù"þz.6ð<ï2ø°õ?÷0÷BìGùHí ú2ì,ø,ðbô€þ.ïSú`ì,øŽðFô€ýpüBì,ûEüšûbþz. ð2ïw ì/¥‚ð<ï`ð<ùúp÷õ?ïhú`ô€þ:‚úhüHð ûbþ.ùú`ð<ÿúg< -€üBðàôhûbþ.ï`ìGùBíš:fî0ùBû<ú`ì,þ.ù îHôEðbô&úhüHð¦:Îþ6ø,ø,þlý9ìtùHí 76îõ?ùú2ìGú`ì,ðbôw:Îþ.ïcõí6ì/.6ð<ùŠûlõ?ù 0ìR Hõ?ú2ð§.ò ¤ -€üBð ïFõ?ú2ì,þ þ:àí.ïFõ?ù d0¸ú`þ.úFõ.ø$÷‚þ6ôEìtú`ì/. 
ðú2þ í.ïFõ?ù d0¸ú`þ.úFõ.øp÷zþ6ôhìGú2ì/.6ðg“~ùHðbíõ?ú2ì/.6ð†ûbþ.î0ùú`ôoþ: ú2ì,ø,ðbô9ìGùŠõ.ø,øBúhüHðDûbþg.6ð<ï`ô8 ;1 I M I 'M o*§zp{ ¨r©@ª«­¬b®¬{¯°v±2²1³´X«{µ­®¶¸·1¹/ºX«{µU«{±2¬¦»8®z¼‚«{µU½8¾ ¿XÀ^Á˜Â*ÃgÄb¾ ¨r©@ª« °R«{±Å¬Æª ®¶Ç¬Cª« ½ª®µC¬b«8½•¬ »8®z¼J«{µg¾ Á¸È(¿,É;ÈMÊJËhÂ*ÃzÄ{¾ ¨r©@ª«t³Ì¯9Íy¹/³²1³Î¯9³Ï®²1±2¬Ì®¶3¬b®¬{¯°£»8®±2¬U«{Í2¬ ¹/±‡¯9±2ÐD»®g¼J«{µcÂ*°R«{¶¬wѰ/²½£µb¹RŪ2¬ »8®±2¬b«bÍ2¬Äb¾ ÁÓÒdÔ,ÕJÖh¿×;ËJÔ×yÂ(ÃgÄ{¾h¯9±· ¨r©@ª«£³ ¯9Íd¹A³B²ˆ³Ø®g¼J«{µ@¯°R°ˆ»8®z¼J«{µb½›®¶h¬Æª«@¬U®¬¯° ±2²1³´X«{µ®¶L¬b¹R°R««8°R«{³&«{±2¬U½3¬Æªh¯9¬Ù®g¼J«{µU°¯9ы´X«{Ú ¬_Û5«8«b±‡»8®±1±«8»{¬U¹A±Å&¬U¹°R«8½¾ÁÒdÔPÖdÜË2Ý'ÞMÒ1ß5Â(ÃgÄ{à ©@ª«½C»8®µb«®¶,¬Æª«3»g¯9±·ˆ¹R·¯9¬b«¹R½w¯l°¹/±«g¯9µ£¶²1±»{Ú ¬b¹R®±®¶k¹A¬U½£»8®z¼‚«bµ½¬{¯9¬b¹R½•¬b¹R»8½8á â9ã Â*ÃgÄåä æLç1×;Ö1×9Ò ÞMÝ9Òy×ÈÖ^Â*ÃzÄXèéæ,ê"¿À"Á€Â*ÃgÄ ë æ,ìPÁíÈ*¿,É;ÈMÊJËÂ*ÃgÄ èæî"ÁÒyÔ,ÕJÖh¿×;Ë'ÔX×1Â*ÃgÄ èæ,ïPÁÒyÔ,ÖyÜË2ÝÞÒ1ßwÂ(ÃgÄ ð ¶L»g¯9±·ˆ¹R·¯9¬b«à ªh¯½@± ®c»8®z¼‚«{µU½8¾ â9ã Â(ÃgÄkämñ1àkòÙ®¬U« ¬Æªh¯9¬óÁíÈ*¿,É;ÈMÊJËv¹½Û5«8¹RŪ2¬U«8·Ó±«8Å2¯9¬U¹/¼‚«°AÐ'¾>½Æ¹/±»8«Ì¯ »8®z¼‚«bµÌÛw¹A¬CªŠ¶*«{Û@«{µÌ¬b¹R°R«8½cÑ1µb®z¼y¹R·1«8½Ì½¬Cµb®±ÅJ«{µÌ«{¼y¹/Ú ·1«{±»8«¶*®µ£¬Æª«»g¯9±·1¹·¯9¬U«à ©@ª«ÙÛ@«8¹RŪ2¬U½w¹/±D¬Æª«Ù»b²ˆµCµb«{±2¬ ¹/³ôѰR«{³Ï«{±‚¬¯9¬U¹®± Û@«{µb«&»Cª ®J½C«{±¸½C®t¯½ ¬b®DÅJ¹/¼J«Ì¯Ì°R«{Íy¹R»8®Jŵ{¯9шª¹R»&®µÆÚ ·1«{µb¹/±Ŋ®±õ¬Cª«‹¶*«g¯9¬C²ˆµU«8½8¾3Ñ1µb«{¶*«bµÆµb¹/±Åéö1µb½•¬N»g¯9±1Ú ·1¹R· ¯9¬U«8½3Ûw¹/¬ÆªÓ¯&ª¹RŪ «bµBŵ{¯9± ·dÚ]¬b®¬{¯°,µ¯9¬U¹®1¾h¬Cª«{± ¯»8»8®µU·ˆ¹/±Å¬b®¬Æª«c¶*®J°R°R®gÛw¹/±Åv®µb·1«{µgáó»g¯9±·1¹R· ¯9¬U«8½ Ûw¹/¬Æª÷³Ï®µU«Ø»8®g¼J«{µU½8¾„Ûw¹/¬Cªø»8®z¼‚«bµb½ù»®±2¬¯¹/±¹/± Å ¶*«{Û@«{µ3¬b¹R°R«8½8¾hÛw¹/¬Cªr°¯9µbÅJ«{µ3»8®g¼J«{µU«8·r»8®±2¬U«{Í2¬b½¾"¯9±· Û£ª«{±Ï¯°R°J«8°R½Æ«@¹R½^«8úJ²h¯°(¾'»g¯9±·1¹R· ¯9¬U«8½^Û£ª®J½Æ«›»8®g¼J«{µU½ ªh¯8¼J«³&®µb«®z¼‚«bµb°¯9Ñv´X«{¬_Û5«8«b±‡»8®±1±«8»{¬U¹A±Å&¬U¹°R«8½à ©@ª«Šûh¯9¬¸¼J«{µU½Æ¹R®±ü²½Æ«8·‰¯ý½C¹/³Ï¹R°¯9µé¶²1±»{¬b¹R®± Ûw¹/¬Æª®²1¬^² ½C¹/± Å£×9Ö1×;ÒÞÝ9Òd×'ÈÖ,¾zª «{±»8«£¿XÀ^ÁþÛ£¯½P¬Cª« ³&®J½¬Ì¹/³ôÑX®µC¬¯9±‚¬DúJ²h¯9±2¬b¹/¬_ÐJà 𠱄¬Cª «»8®³ôÑ®J½Æ¹/¬b« »g¯½C«¾>¹/±ˆ±«{µc¹A±½•¬¯9±»8«8½c¹A±»{µU«g¯½Æ«Ì¬Cª«ô±‚²ˆ³´«{µÏ®¶ ÑX®J½C½Æ¹/´°« 
»8®z¼‚«{µU½£¬U®¬Æª«Ù«{͂¬b«{±2¬£¬Cªh¯9¬5¹/¬>±®ó°R®±ÅJ«{µ ´X«8»8®³&«8½L¯wÅJ®d®y· ³&«g¯½²1µb«›®¶yµU«8°R¹¯9´ ¹R°R¹/¬_ÐÌÂZ¯9¬^°R«g¯½¬ ±®¬Ù¯9¬w¶Z¯»8« ¼J¯°A²«gÄbà ÿ     ©@ª«r½Ðd½¬b«b³ÎÛ£¯½D¬Cµ¯¹/±«8·õ®±m¬Æª«L«{±1± ©,µb««{Ú ´h¯9±˜Â‹¯9µb»{²½c«{¬ô¯°(ྠJÄ! "y«8»b¬b¹R®±½$#9Ú #%ϯ9±·‹¬U«8½¬U«8·Ó®±&d«8»{¬U¹®±'# rÂ(©q¯9´°R«(;Ä{¾P½U¯9³Ï« ¯½£²½Æ«8·´2Ð)‡¯ÅJ«bµÆ³Ì¯9±¸Â* +'Äb¾-,@®J°°R¹/±½Â*.  /'Äb¾ ¯9±·10Ù¯9¬C±h¯9ѯ9µ22ª¹@Â*  /Ä{¾h¯9±·t´X«»g¯9³Ï«ó¯c»8®³cÚ ³&®±t¬b«½¬Æ´«8·Xà ©@ª«‹¬¯½3y½Û5«{µU«Ó½C«8°R«8»{¬b«8· ½Æ®Š¯½D¬U®Š·ˆ«b³Ï®±1Ú ½¬Cµ¯9¬U«c¬Æª«ô´«{± «böˆ¬c®¶£² ½C¹/± Åv¹/±2¬U«{µÆ±h¯°@½¬ÆµC²»{¬Æ²1µb« ©Pµ{¯¹/±54¯9¬¯d¾%! ñ!#9Ú2#%J¾#6 6 687ô½C«{±2¬b«b± »8«8½ ´h¯½Æ« »8®³ôÑX®J½C¹/¬b« ´¯½C«á ¯°R° ò9 :: #87;# :; 687 / < =  7;  / / #6ñ>/ :%.< ©^«8½¬?4¯9¬{¯d¾@! A# d¾B#87>.:½Æ«{±‚¬b«{±»8«8½ ´h¯½Æ« »8®³cÑX®J½Æ¹/¬U« ´h¯½Æ«;á/¯°R° ò?  + #87 +;gñ;: /  < =  7; : ## : / :%< ©L¯9´ °R«CJáDd¹FE8«8½Ù®¶^¬Cµ¯¹/±¹A±Å̯9±·D¬U«8½¬Ù·¯9¬¯2¾y±®¬b« ¬Æª«3½Æ¹/³Ï¹R°¯9µ шµU®ÑX®µÆ¬U¹R®± ½£®¶^´h¯½Æ«¹/±½¬{¯9± »«8½ · ¯9¬{¯‹¶(®µ °R«g¯9µC±¹A±ÅÓ»®³ôÑX®J½C¹/¬b«v½¬CµÆ²»{¬Æ²1µb«½8àGÓ« ª¯¼J«&½¬Æ²·1¹R«8·r¬Cª «&«{ºX«»{¬ó®¶>±®²1±ˆÚ]Ñ1ª1µ¯½C«Ï¹/±1¶*®µCÚ ³ ¯9¬U¹R®± ®±ý°R«g¯9µÆ±¹/± Åé¼J«{µC´ýшª1µ¯½C«8½´2Ðõ½C«{¬Æ¬U¹A±Å °R¹A³&¹/¬b½G®±N¬Cª «3±2²1³´X«bµB®¶k«{³B´X«8·1·ˆ«8·‹¹/±½¬{¯9± »«8½8¾ H-IKJML ¹/±ô¯£¬U¹°R«àMNé°R¹/³Ï¹/¬L®¶OE8«{µb®3«{³²°¯9¬b«8½^¬Cª «›ûh¯9¬ ¼J«{µb½C¹®±Š½C¹/± »8« °«z¯9µÆ±¹/± Åv¬{¯.‚«½óѰ¯»8«Ì¶µU®³PRQS ¬¯ÅJ½›®± °/ÐJàk©@ª«Göˆ±h¯°h®²1¬ÆÑ1²1¬g¾2ª®zÛ5«{¼J«{µz¾ˆ³Ì¯8Ð̹/±1Ú »8°/² ·1«›«{³´X«8·1·1«8·ô¹/±½¬{¯9± »«8½"½C¹/± »«5¹/±½¬{¯9± »«8½^³ ¯Ð ´X«Ù»8®³cÑX®J½C¹/¬b«àT0w«8½•² °/¬U½w¹/±·ˆ¹R»g¯9¬U«Ù¬Cª¯9¬ HBI3JMLVU ñ ¯°R°®zÛw½,¶*®µ^ò?‹¹/±1¶*®µÆ³Ì¯9¬b¹R®±¬U®w»8®±2¬Cµb¹/´1²ˆ¬U«›¬b® =  °R«g¯9µÆ±¹/± Å1à N ³&¹/±¹A³Ì¯°›»8®±2¬b«{͂¬ó®¶£®± «lÛ5®µU·Ó¯9±·r²1Ñr¬b® ¬_Û5®€Û5®µU·ˆ½v®±­«g¯»Cª­½Æ¹R·1«ÓÛ£¯½t²½C«8·X¾ó¯9±·ý¬b¹R°R« ¬Æª1µb«8½•ª®J°·BÛG¯½L½C«{¬L¬b®9WYX‹ämñ>Z[:£¶*®J°R°R®zÛw¹A±ÅGµb«8½•² °/¬U½ ®¶¬Cª«Dûh¯9¬Ì¼J«{µb½C¹®±,àGN ³Ï¹/±¹/³ ¯° »8®±2¬U«{Í2¬DÛ£¯½ ± ®¬›±«8»8«8½Æ½U¯9µÆÐD¹/±Ì¬Cª «Gûh¯9¬>¼J«{µb½C¹R®±P¾1´1²ˆ¬@ª«{µb«G¬Æª« 
¯·ˆ·1¹/¬b¹R®±h¯°h«{¼d¹·1«{±»8«3¶µU®³ü«{³´X«8·1·1«8·t¹/±½¬{¯9± »8«½ ÅJ¹/¼‚«½Dµb¹R½C«‹¬b®í³&®µb«‹Ñ1µb«»8¹R½Æ¹R®±¦«{µÆµU®µb½ Ú ¯Ñ1ª «{Ú ± ®³&«{±®±¸»8®³cÑX«{±½b¯9¬U«8·í´2ÐÓ½C«{¬Æ¬U¹A±År¯ ³Ï¹/±¹/³ ¯° »8®±2¬U«{Í2¬gàL©@ª«›³ ¯9Íy¹/³Ì¯°2¬U¹R°«@°R«{±Å¬ÆªÛ£¯½L½Æ«b¬L¬b®+%\ ª ¹RŪ«{µ ¼‚¯°/²«8½ÙÅ2¯¼J«ó¯c¼J«{µÆÐv½³Ì¯°°^¹/³cшµU®g¼J«{³&«{±2¬ Û£ª ¹R»Cª‹·ˆ¹R·Ì±®¬T]•² ½•¬b¹/¶Ð̬Cª«l¯·1·1¹/¬b¹R®±h¯°h³&«{³Ï®µCÐJà ©q¯9´°R«&#íÑ1µU«8½Æ«{±2¬U½vµb«8½•²°A¬U½v¶*®µv½Æ¹/³²°A¬{¯9±«8®²½ ò? ¯9±· =  °R«g¯9µÆ±¹/±Å1¾ô¯9± ·ý¶(®µ‡°R«g¯9µC± ¹/±Å =  Ûw¹/¬Æª®²ˆ¬ò9Là^®µ H-IKJML äØñ1¾Pò9>½·1®D±®¬»8®±1Ú ¬ÆµU¹A´1²1¬b«‹¬U®Š¹/±1¶*«{µb«{±»8«8\c¬Æª«r½•³ ¯°R°ÑX«{µC¶*®µC³ ¯9±»8« ·ˆ¹/ºX«bµb«{±»8«ÌµU«8½²°/¬b½l¶µb®;³øò?a¹/±ˆû1²«{±»8«t®±é»8®±1Ú û ¹R»{¬lµb«8½C®J°/²ˆ¬U¹R®±PàÓ©@ª««{ºX«8»{¬&®¶wò?›½ô¹R½ó»8°R«g¯9µU°/Ð ¼y¹R½Æ¹/´°R«kÛ£ª«{± H I3JML ä_k¹R½^¯°R°®zÛ@«8·X¾Ðy¹R«8°R·1¹/± Å#87;< ³Ï®µU«kµU«8»g¯°R°ˆ¯9±·ó¯9±ó¹/³cшµU®g¼J«{³&«{±2¬›®¶gñ;<ý¹/±`a à ^®µkò?›½k®±°AÐ Â*©L¯9´°«bJÄb¾2¯°°R®zÛw¹/± Å«{³´«8·ˆ·1«8· ¹/± ½•¬¯9±»8«8½¹A³cÑ1µb®z¼J«8·ý¬Æª«‹µb«8»z¯°°¹/±¦«{Í2ÑX«b± ½C«Ó®¶ шµU«8»8¹R½Æ¹R®±,àD«ª¯¼J««{Í2ÑX«bµb¹/³Ï«{±‚¬b«8·‹Ûw¹/¬CªN½C«{ÑX«{µÆÚ ¯9¬b¹/±Å£ò?›½X¶Mµb®³ý´¯;½Æ«{Ú_ò?›½ ç àcNÙ½,©L¯9´°«R@½ª®zÛw½8¾ dfeg.h*ijkmlonp[i.qAi.loirFnhskKt.nmjup[vohCwxzyyh*nijK{b| }%~exoj yg.h€[h2xoYp[i.qƒ‚otxzi„yp…y‡†Txzi‰ˆsŠ.ˆs‹8ŒŽ‹„ˆŽŠ yg.h-j‡h2kKloi {o‘8xoj‡h2 loi9’xzyrFvoh*nfj‡p[loih3“„w8h*np[h3ikKh ”s•–-•o—™˜.šz›Žœžš2Ÿž•Ršo˜8”* @” ›Ž¡¢–>—z£Y¤ •¦¥CšsŸO•£Y¤ •o—o˜8§F§>–—z•o¨ ©¦›F”s›F£8œª£8—¬«9­D”™®V¯°ŸO•oœ²±B³K´Mµ·¶¹¸>ºš2Ÿž•o—z•»˜8”S˜ ¼ ®[½ ¾¿—•¦©Y˜8§F§¥>•¦©o—z•Y˜8”s•8º%»Ÿž•—•Y˜8” »ŸO•oœÀ±-³K´Mµ?¶¿Á º šsŸO•—•™©Y˜8§F§V¥>•¦©o—z•Y˜8”s•»˜8”A£8œO§Äß>®…Æ;¾$®_ÇRŸO•È•ª‡¨ ª•¦©šÉ£8ªÊ¡Ê£@¥>•¦§F›ŽœO°šsŸO•'›Žœ;š•o—2œ˜8§¢«9­Ë”*šs—2ÌO©ošsÌ>—• Í ±B³K´Mµ?¶¹ÁÎR©¦§F•Y˜.—§ŽÃ1”3ŸO£Y»b”Ϫ£8—b©¦£8¡C–-£ ”s›Žš•¬«?­ÏЅ”™® ÇRŸO•Sš™˜.ÑO§F•˜8§F”s£Ê”*ŸO£„»b”bš2ŸB˜.šbÒ­Ó›Žœ>ª£8—2¡(˜.šz›F£8œ5›Žœ ¥>›F¥ÔœO£8šcŸ˜™¤ •Ϙϔs›F8œO›ŽÕž©Y˜.œ!šD›Ž¡¢–˜8©ošT£8œ«9­§F•Y˜.—sœ>¨ ›ŽœO>® ¡(˜.֞® —z•¦©8® –>—•¦©8® ר ±-³K´Mµ Ò­Ù£8œž§ŽÃ ¸ Æ;Ú%®FÁ Ú Û%® ¼ ½ Ü%® ¼ Ò­Ù£8œž§ŽÃ Á ½ Ü%®[Ý Û ¼ 
®[Ü Û8¸>®[Ú Ò­Þ»b›ŽšsŸ5«9­ ¸ Æ;½%®ßÆ Ú Ú%®ßÆ ½ Ú%® ¼ Ò­Þ»b›ŽšsŸ5«9­ Á Ü ¼ ®[Û Û%Á ®[½ Ú8¸>®[½ ÇM˜.ÑO§à• ¼%á ÒV­Þâb•¦”3ÌO§Äšz”¦º>ãYä¶Ó¸>å[Û%º;šz›F§F•?§F•oœO8šsŸæ°½ ¡(˜.֞® —z•¦©8® –—z•¦©8® ר ±B³3´Mµ «9­Ù£8œž§ŽÃ ¸ Ü%Á ®[Ü Ú Û%®[Ü Ú Ý%® ¼ Á Ü Ú%®[Û Û ½%®[Ü Ú ½%® ¼ «9­Þ»b›ŽšsŸ1Ò­ ¸ Ü%Á ®[Ú Ú Ú%®FÁ Ú Ý%®Žç Á Ü Û%®ß¸ Û Û%®ß¸ Ú8Æ>®ŽÚ ÑB˜8”2•Ô«?­ ¸ Ý ç%®ßÆ Ý ç%®[Û Ý ç%®Ž½ ©¦£8¡C–-£ ”s›Žšz• Æ ¼ ®ß¸ Û Ú%®FÁ ½%Á ®ŽÚ ˜8§F§«9­ Ú Ý%®[ç Ü Ü%®[½ Ü ç%®ŽÚ ÑB˜8”2•Ô«?­ Á Ý ç%® ¼ Ý ç%®[½ Ý ç%®Žç ©¦£8¡C–-£ ”s›Žšz• Ú%Á ®ßÆ Æ;Ý%®ß¸ ½ Ü%®àÁ ˜8§F§«9­ Ü Ú%® ¼ Ú Ú%®[Ú Ü ¼ ® ¼ «9­ Í ÇèÔé>Ý Ý Î Ú Û%®FÁ Ý%Á ®[ç Ü ç%®…¸ ÇM˜.ÑO§à•ç á «?­âb•¦”3ÌO§Äšz”¦º;ã ä ¶ê¸>å[Û%ºšz›F§à•R§F•oœO8šsŸæ°½%® âb£„»b”¢½YëžÁY¸)—•oª•o—šz£5•oÖ!–-•o—›Ž¡Ê•oœ;šz”»ŸO•o—•$ÑB˜8”2•o¨ «?­ ”R»ì•—•S¥>›à”3š›ŽœO8ÌO›à”3ŸO•¦¥Aª‡—z£8¡í©¦£8¡C–-£ ”s›Žšz•V£8œO•¦”™® ÇRŸO•o—•b˜.—z•쩏Ì—2—•oœ!š§ŽÃœO££8šsŸO•o—M–˜.—sš›î˜8§!–˜.—”s•—” £8œ²š2Ÿž•¦”2•šo˜8”* %”Sš£A©¦£8¡C–˜.—•CšsŸO•C©¦£8¡ÔÑO›Žœž•¦¥ÈÒV­ ˜.œO¥°«?­ï—•¦”3̞§Žšz”Aš£>®_ǃð2£8œžCèS›Ž¡ñé>˜.œž Í ÁÝ Ý Ý8Î –>—•¦”2•oœ;šz•¦¥Þ—•¦”3̞§ŽšÊª£8—©™£8¡¢–-£ ”2›Žš•A«9­MºT£8Ñ>š™˜8›ŽœO•¦¥ Ñ;׏•o–B•Y˜.š•¦¥É©Y˜8”s©Y˜8¥>›ŽœO>ºc”2›Ž¡$›F§î˜.—Sš£A£8Ì>—S—•™”*ÌO§Žš” »b›ŽšsŸC”s•o–-•o—o˜.š•Rј8”s•b˜.œO¥©™£8¡¢–-£ ”2›Žš•R«?­ ”T˜.œO¥ÔœO£ ›Žœ;šz•o—sœ˜8§”*š2—sÌO©oš2Ì—z•8®ìò9Ì>—ì—z•¦”*ÌO§Žš”˜.—z•b§F£Y»R•o—šsŸ˜.œ šsŸO£ ”2•£8ªTªu̞§F§–˜.—”2•o—”™º•8®ß>®FºóR£ §F§à›ŽœO” Í ÁÝ Ý Ú8Îìë˜8” ¡Ê›à8Ÿ!š?Ñ-•S•oÖ!–-•¦©oš•¦¥”2›Žœž©¦•V¡Ô̞©2Ÿ²§F•¦”s”9”*š2—sÌO©ošsÌ>—o˜8§ ¥O˜.š™˜%ºO˜.œž¥(œž£C§F•oÖ@›F©Y˜8§ƒ¥O˜.š™˜C˜.—•Ñ-•¦›ŽœO·Ìž”2•¦¥B® ô õÉö÷8ø>ùD÷8÷ öúû ¯"•ÔŸ˜™¤!•·–>—•¦”2•oœ;š•¦¥Þ˜Ê¡Ê•o¡Ê£8—sÃ;¨fј8”s•¦¥²§F•Y˜.—2œO›ÄœO ¡$•oš2ŸO£@¥Âª£8—9–B˜.—2š›î˜8§ƒ–˜.—”2›Žœž$»Ÿž›F©2Ÿ"©Y˜.œŸ˜.œž¥>§F• ˜.œž¥É•oÖ;–O§F£ ›Žš©¦£8¡¢–B£ ”s›Žš›F£8œ˜8§D›Žœ>ª£8—2¡(˜.šz›F£8œ®)ü›Ž  • £8šsŸO•o—”*Ÿ˜8§F§F£Y»¨f–˜.—”2›ÄœOC”*Ã@”3š•o¡Ê”¦ºO›Žš›à”R¡Ê£ ”*šRÌO”s•o¨ ª‡ÌO§»ŸO•oœýšsŸO•'œ;Ì>¡ÔÑ-•o—Þ£8ªÀš™˜.—z •oš–B˜.š2š•o—2œO”"›F” ”*¡À˜8§F§®Tþfœ)–˜.—sšz›à©oÌO§î˜.—Yº>š2Ÿž•¡Ê•ošsŸO£@¥)¥>£@•¦”RœO£8šb—z•o¨ ÿ 
ÌO›Ž—•Aªu̞§F§ŽÃ;¨f–˜.—”2•¦¥°”2•oœ;š•œž©¦•¦”)˜8”Cšs—™˜8›ŽœO›ŽœO>ºìÌ>œ>¨ §F›Ä  •š2—™˜8›Žœ˜.ў§F•ª‡ÌO§F§B–B˜.—z”s›ŽœO¢¡Ê•oš2Ÿž£%¥”™® ÇRŸž•Àš2—™˜8›ŽœO›ŽœO5¡À˜.š•—›î˜8§ ŸB˜8”¬š£©¦£8œ;šo˜8›Žœ £8œO§ŽÃ Ñ—o˜8©2 !•š›Žœžê£8ªCšsŸO•"šo˜.— •ošÂ–˜.šsšz•o—sœO”¦ºÔ›Ž¡C–ž§ŽÃ@›ŽœO ¡ÔÌO©sŸ ”s›Ž¡C–ž§F•o—¹šs—o˜8›Žœž›ŽœOí¡À˜.š•—›î˜8§"»ŸO•oœ šsŸO• –B˜.—z”s›ŽœOGšo˜8”*  ›à”Þ§F›Ž¡Ê›Äšz•¦¥B® ÇRŸO›F”Ř.œO¥ ”2›Ä¡Ê›F§î˜.— ¡$•oš2ŸO£@¥>”Ô˜.—•8ºM˜8©¦©¦£8—¥>›ŽœO §ŽÃ º ˜.š2šs—o˜8©oš›Ž¤ •Ê›Žœ&©Y˜8”2•¦” »Ÿž•—•˜Rª‡ÌO§F§ŽÃS–B˜.—z”s•¦¥Ô©¦£8—2–>̞”M›F”œO£8šT˜¦¤ ˜8›F§î˜.ў§F•‰ª£8— šs—o˜8›Žœž›ŽœO>º;£8—컟O•oœ1˜ªu̞§F§O–˜.—z”s•?›F”‰œO£8šDœO•¦©¦•¦”2”˜.—sà ª£8—Ÿ˜.œž¥>§F›ŽœO¢š2ŸO•?–>—£8ÑO§F•o¡)® é@©2Ÿ˜¢•šV˜8§® Í ÁÝ Ý Ý8΃–>—£„¤@›F¥>•V˜9šsŸO£8—z£8̞8ŸÊ©¦£8¡C¨ –B˜.—z›F”s£8œÂ£8ªMšsŸO•  ˜.šbé@üÈ¡$•oš2ŸO£@¥)»b›ŽšsŸ¬ò­c® óR£8œž”2›F¥•—›ŽœO ­ìòSé ¥ž˜.šo˜ý£8œž§ŽÃ ºš2Ÿž•_©™£8¡¢–-£ ”2›Ž¨ š›F£8œ˜8§-¡Ê•oš2Ÿž£%¥Â—•¦”2•o¡ÔÑO§à•™”SšsŸO•SòV­ÏÁÔ¡$£%¥>•¦§ƒ›Äœ šsŸ˜.š"›Žš²ÌO”s•¦”É”*Ì>Ñ>¨3©¦£8œO”3š›ŽšsÌO•oœ;šÞ›Žœ>ª£8—s¡À˜.š›F£8œý›Äœ £8—¥>•o—š£Ù©¦£8œO”3šs—sÌO©oš²•o¤@›F¥>•oœO©¦•ɪ£8—Ÿž›F8ŸO•o—s¨K§F•o¤ •¦§ ”*š2—sÌO©ošsÌ>—•™”¦®ÇRŸž•o—z•1˜.—z•8ºcŸO£„»ì•o¤ •o—„ºì”2£8¡$•À¥>› -•o—s¨ •oœO©¦•¦” á óR£8œž”2›F¥•o—·šsŸO•Ê›Žœ–>Ì>š   O®8þfœSò­Mº˜.œO¥»b›F§F§;Ñ-•R§F•Y˜™¤!•¦”‰›ŽœšsŸO• šs—z•¦•»Ÿž£”s•—z£@£8šT›F”Oº@˜.œO¥»b›F§F§O©¦£8œ;š2—›ŽÑ>Ìšz• š£¢¤%›î˜Ôš2ŸB˜.š›Žœ>œž•o—9«?­M®%þfœ !bé%ü‰º@š2Ÿž•¦”2• š™˜8 ”‰»b›à§F§ž–B˜.—2š›F©¦›Ž–˜.š•›Žœ"Cš›F§F•¦”b˜8” »ì•¦§F§˜8” ›Äœ#Àšz›F§F•¦”¦®bÇRŸ˜.š?›F”¦º$$»ì£8ÌO§F¥ÂÑ-•S©¦£8œž”2›F¥@¨ •o—•¦¥A˜8” ©¦£8¡C–—z›F”s•¦¥A£8ª% &'ʘ8”T»R•¦§F§-˜8” &('O® *) ¤ •oœÔ”*Ì>–>–-£ ”s›ŽœO+S˜.œO¥,9¥>£œž£8𩦣8œ;š2—›ŽÑ>Ìšz• š£S•o¤@›F¥>•oœO©¦•ª£8—-Oº;š2Ÿž•.!bé%ü”s©¦£8—z•?£8ª/ »b›à§F§ƒÑB•Ô©Y˜8§F©o̞§î˜.šz•¦¥»b›ŽšsŸ(—•¦;˜Y—¥>•¦¥²˜8”¬˜ š•o—2¡$›Žœ˜8§®?ÇRŸB˜.š›F”¦ºOšsŸO•Ô›Žœ;šz•o—sœ˜8§M”3šs—sÌO©oš2Ì—z• ˜.œž¥C”s©¦£8—z•b£8ª$·›F”MœO£8šTš™˜.  
•oœÀ›Žœ;šz£˜8©¦©¦£8Ì>œ;šY® þfœ Sò­Mºš2ŸO•Ê–—z£8ÑB˜.ÑO›F§F›ŽšKÃ"£8ªšsŸO•0œO£@¥>• »ì£8ÌO§F¥ÊÑ-•?©Y˜8§F©oÌO§î˜.š•™¥Àª‡—z£8¡ š2Ÿ˜.šì£8ªƒ›Žš”R©¦£8œ>¨ ”*šz›Äš2̞•œ;š”™® ÇRŸž•R”s©™£8—›ŽœOV–>—z£@©¦•¦”2”M£8ª1¬ò­›F”ƒÑ˜8”s•¦¥C£8œ$˜ ”*šo˜.š›F”*šz›F©Y˜8§O¡$£%¥>•¦§º;»Ÿž•—•Y˜8”b›Žœ&bé%üTº@›Žš›F” ÑB˜8”2•¦¥"£8œÉ–—z£8–-•o—2š›F•¦”©Y˜8§F©oÌO§î˜.š•™¥²ª‡—z£8¡ šsŸO• ©¦£Y¤ •o—8—™˜.–>Ÿ® þfœ2!bé%üޛŽš¬›F”–-£ ”2”s›ŽÑO§à•¬š£A”3–-•¦©¦›Žª‡Ã›ŽªDšz›F§F•¦” £8ª¬˜²©™£8¡¢–-£ ”2›Žš•)›ÄœO”3š™˜.œO©¦•1»b›F§F§ÑB•5©o—z•Y˜.š•¦¥ 3547686947:<;>=@?A47B6947?C;ED@4FHGJIKLG$DAMON :PN 626RQ$S MOSUTVSWMYX[Z\Q]NO^-_KLG035S\=$^8SU`a=$M$N G0Q]NOb7Q$M c;G$SW^d69SWD N G]^e6fKLG$_WSW^fIgFHQ$SUB9S bV4VN G$bhK7MOM\68Q$S"FHKWci6j47?@; D@4FHG#F+NkMOMl_UB9SKL6jSKmMO476+47`n69NOMOSW^WX o2p%qsr _KLG35SmG1KL68=@BfK7MOM c `Y47B8:=$MtKL6jSWD!6j4 K7M ; MO4Fi`u47BnFvNOMOD@_KLBjD@^fICK7^wN G6RQ]S p%qxrgy :04D@SWMYI 3cK7MkMO4zF+N G]b0^R47:PSm47`w68Q$SMOSKWTVSW^v6j4<35S%=@G@; {G$4FHGJX |vMOMO4F+N G$b}F+NOMODC_zKLBjD@^~N G€+‚ƒ Fg47=$MOD35S-:047BjS\_W47:<?]MONO_KL69SWD35SW_zKL=]^RSHN„6lB9SU; MONOSW^+47G 69NOMON„G$b<47`_f47G6jN G=$47=$^^RSW…V=$SUG$_WSW^WX |†?@Bj47?5SUBR6Ec2354768Q*:PSU6RQ]4D@^^dQ1KLBjS<NO^%68Q1KL6%68Q$SUc F+NOMOM‡F\47B8{2Q1KLBjD@SUBFHQ$SUGˆ68Q$SUBjSPNk^%?$MOSUG6Ec‰47`+SUT; NOD@SUG]_WSP`u47BŠK#_KLG]D@NOD$KL6jS‹‚N :"KŒKLGŽIl?JX_7XUX!Z\Q$NO^ _W47G6RBfK7D@NO_U6j^v68Q$SN G6R=$N„69NO47G!6RQ1KL6:047BjS0Œ^d6RBfK7NOb7Q6R; `u47BRF.KLB9D5Œ_zKLG]D@NOD$KL6jSW^JF\47=]MOD%35SNODCSUG‘6jN ’$SWD`>K7^e6jSUBzX “ Sx?$MtKLG 694Š6fK”_8{MOSx6RQ]NO^H?@B9473]MOSU:N G&68Q$Sx`a=@68=@BjS”X |+G$476RQ]SUBB9SWMtKL6jSWD†K7MObV47BjN 68Q@:•FHK7^?@BjSW^RSUG6jSWD 3c–‚SU{N G$SŠKLG]D#—vBjNO^dQ@:˜KG™‹eš”›V›Vœ‘I(|+?@?$MOS r NOS r KLB9^8SUBUhKLG$D‚SU{N G$S"‹dš”›V›Vž7jXŸZ\Q$S K7MObV47BjN 68Q@: SU¡6RBfK7_U69^0b7BfKL:<:˜KLB<B8=$MOSW^<F+N 68Q¢hKLG$Dˆ£¤‹>KLG$D ?54V^R^8N 3$M„c'4768Q$SUB^d6RB8=$_U68=@BjSf^jK7^"G$47GC;69SUB8:0N G/K7MO^fX H4768Q‰|+?@?]MOS r NkS‹¥Q]SUB9SKL`a69SUBŠ| r-r ,KLG]D2+‚ƒ 
=$^8SBfKfFH;eD$KL6UK+SU¡CKL:<?$MkSf^l`u47BJ?/KLB9^8N G$b@I‘KLG$D_KLG3/S BjSf^d68B9NO_U6jSWDP6j4^e?5SW_WN ’$SWD06fKLB9bVSU6[G$47G@;>69SUB8:0N G1K7Mk^\47B ?1KL6869SUB8G$^WX-ZgQ$SD@N ¦5SUB9SUG]_WSf^MONOS%N„GJ§ o +‚ƒ˜BjSW_W47:,3]N G$SW^‡`¥BfK7b7:0SUG69^w47`1N„G$^e6fKLG$_WSW^ `u47B\bVSUG$SUBfK7MONO¨KL6jNO47G$^WI@FHQ$NOMkSx| r[r =]^RSW^\B8=$MOSW^ D@SUBjN TVSWD`aB947:©_W47:?$MOSU6jS,N„G$^e6fKLG$_WSW^WX o Z\Q$S2b7BfKL:<:˜KLB"BR=]MOSW^#47`| r-r D@4ˆG]476&N G@; _WM =$DCS~_W47G‘6jSU¡‘6I0FHQ]NO_RQªNO^6UKL{VSUGªN G‘6j4K7_U; _W47=@G6xFHQ]SUG2bVSUG$SUBUKL6jN G$b"68Q$SG$47G@;6jSUBR:PN G1K7M ¢@X$«G +‚ƒI@68Q$S%_W47G6jSj¡6NO^+_W47G$^d=$M 6jSWD˜`u47B SK7_RQ!N G$^d6UKLG]_WS,_KLG]D@NOD$KL6jS7X o | r-r IiMON„{VS p%qsr I =$^RSW^¬K­?@B9473/KL3$NOMONO^d6jNO_ :P4D@SWMYX®Z\Q$S2?@Bj4731KL3$NOMON„6Ec47`0Kib7BfKL:<:˜KLB B8=$MOSH¯Š°­±ŠNO^²@BjSW…1‹Y¯Š°´³8µ7²@B9SW…1‹u¯UXV|+G1K7M ; 4VbV47=$^8M cVI6RQ]S-D@SUG]4”:PN G1KL6j47BlN Gm+‚ƒ<Fg47=$MOD 35S%²@BjSW…]‹a±CjX Z\Q$S&?@BjSW^RSUG69SWD~:0SU68Q$4Di_W47G$_WSUBRG]^0?@B9N„:"KLBjNOM c F+N 68Q2?@Q@BfK7^RSW^WIJFHQ]NO_RQ _KLG3/SBjSU?@B9SW^8SUG‘6jSWD‰3c2K 68B9SWS^d6RB8=$_U68=@BjS”XA«6&Nk^0G$476 K7N :0SWDKL6"Q1KLG$DCMON G$b D@SU?5SUG$D@SUG]_WNOSW^fI$FHQ]NO_RQ&B9SW…V=$N„B9SxQ$SKWTc0=$^8Sx47`lMOSU¡; NO_K7M1N G@`u47B8:"KL6jNO47G‹Y¶N G$DCMOSxKLG]D0·+44768QJI1š”›V› y I7`u47B r-r KL6R6fK7_RQC:0SUG6UUX[|v^x‹ p K7SWMOSU:˜KLG$^\SU6.K7MYXOI1š”›V›V›V ^dQ$4zF%I‘MOSU¡Nk_zK7MN G@`u47B8:"KL6jNO47GŠN :<?@Bj4zT‘SW^47GЏ r KLG$D ¹ r _8Q‘=CG@{N G$b!K7^.FgSWMOMYX,‚N G$_WSŠ47=@B:0SU68Q$4D =$^8Sf^ BfKfFAD$KL6fKI1B9SU?CB9SW^8SjG6jN G$bMkSU¡NO_K7M‡SUG6RBjNOSW^xF+NOMOMwB9SU; …V=$N BjSKŠMO476v47`n:0SU:047B8cVX «G#Kv`¥=C6R=CB9SHF\47B8{1IF\S+?$MtKLG06j4x=$^8SH6RQ$S+^dc^e6jSU: `u47B-?CB94TNkD@N G$bN G$^d6UKLG]_fSº9»L¼5½L¾>½V»7¿>ÀjÁjI5KLG$D0D@Nk^9KL:; 3]NOb7=1KL69S068Q$SU:Â=$^RN„G$bhKLG~K7MObV47BjN 6RQC:Ã:047BjS&^e=]N 6R; KL3]MOSx`Y47BHQ/KLG$D@MON„G$b0MOSU¡NO_K7MlN GC`Y47B8:"KL6jNO47GJX[|+GK7D; DCN 69NO47G/K7M]?54V^8^RN 3]NOMON 6Ec NO^H694Š=$^8SxF\47BjD;6Ec?5SW^fI/^e=$_8Q 
K7^ŠK#^d?5SW_WNtK7M-6fK7b#`u4”B,3/SU;>TVSUBR3]^fI[47B`Y47B?CB9SU?54V^RN„; 6jNO47G$^[MON {VSŠŒ47`zŒVFHQ]NO_RQ!KL6R6fK7_RQ]Sf^[:"K7N G]M c06j4sG]47=@G$^ ‹>‚SU{N G$SŠKLG$D —vBjNO^dQ@:"KLGŽIlš”›V›VœVjX «G˜K.^8N :PNOMtKLB‡TVK7N GŠ694,‚‘{=@6sKLG$D \BUKLG6j^,‹dš”›V›Vž7 KLG]D\=$_8Q@Q$4VMO¨SU6K7MYX‹dš”›V›V›7jI+68Q$S:0SU68Q$4D'SU¡; 6jSUG$D@^.KLG"SU¡Nk^e6jN G$bÄ1KL6\^dQ1K7MOMk4zFH;?/KLB9^8N G$b:0SU68Q$4D 6j4*Q/KLG$D@MOS*_f47:?54V^RN 6jS^d6RB8=$_U68=@BjSf^WXA«6"cNOSWMOD@^!K ^8NOb7G$N ’]_KLG‘6gN :<?CB94TVSU:0SUG6H4zT‘SUB\6RQ]S\Ä1KL6‡:0SU68Q$4D5I SW^d?5Sf_WNtK7MkM cˆ`u47B0MO47G]b2KLG$Dˆ:P47B9S _W47:<?$MOSU¡™^e68BR=]_U; 68=@BjSW^fX |^,_KLGh35S0SU¡?/SW_U6jSWD/Iw68Q$S0?5SUBR`u47BR:˜KLG$_WS 47`68Q$S?/KLBR6jNtK7M7:0SU68Q$4D,Nk^l^e6jNOMOMMO4zFgSUBl68Q1KLG6RQ/KL6l47` `a=$MOM?1KLBj^RSUBj^fIFHQ$NO_8Q"SU¡?$MO4VN 6s‹KLG$DŠB9SW…V=$N„B9S:=$_8Q BjNO_RQ]SjBsN G@`u47B8:"KL6jNO47GJX[Z\Q$S%BjSW^e=$M„69^47`w6RQ$Nk^.MkN G$S47` BjSW^RSKLBj_RQ2SUG@BjNO_RQ268Q$S^e?/K7_WSm47`[K7M 6jSUBRG/KL69N T‘Sm?1KLBj^e; N G]b0KL?@?CB94K7_8Q$SW^fI/K7N :0N G]b<694ŠB9SWD=$_WSx68Q$S%bKL?&3/SU; 6EFgSWSUG^eQ/K7MOMO4zFÅKLG$D˜`a=$MkM]?/KBj^8N G$b@X Æ&ÇLȎÉJÊË"Ìa͑ÎnÏ$ÍÐhÍÉCÑ”Ò Ó XhÔ<X268Q1KLG@{^†Õ47BRG ¹ SWSUG$^d6RBfKI ‚@KL3]N G$S \=]_RQ@Q]4ÖMk¨7I KLG$D ÔxQ1K7MkNOM ‚N :"K× KLG0`u47B‡6RQ]47B947=$b7Q#KLG$D<Q$SWM ?C`¥=]M1D@NO^8_U=$^R^8NO47G$^WI K7^F\SWMOM+K7^ “ K7M 6jSUB p K7SWMOSU:˜KLG$^0KLG]Dˆ|+G‘6fK7M\TVKLG DCSjBH4V^8_RQ `Y47BHQ]SWM ?@`a=$MJ_W47::0SUG69^WX ØÙ$ÚdÙ@ÛÙ$Ü[Ý@Ù$Þ ßà+á5àâãVäåRæ”àèçjééÖçàêá5ëfìdíîäVï ãLæ'ðEñ”òVäó7í9àõôuä ö à+÷\à[øwåRìEù‡î„ðeóú%ßà[á/à.âãVäåRæ”úëfäû ÷\àgü]å8äVäLæLú åjû7îþýdÿWìdí9ú      !"$#&%' )(+*,*--#  .$ /012# 3 4( *--9ú65ëfï”å9í8749:;<7:&=Và?>@òLùwåRìUú A ÿfìdû7ìdåjð>ñ”ýUà ßàjâìEï”ë4BxÿWä$úUôRà A ëWï”ëfä$úWëWäûDCvà>\ìEæ<Bsÿ@„ÿ9ù[íEóLî¥ànç9éé4=Öà âEBxåBxÿWìEæ FaãëzíEåjû<ë&5<5VìdÿzëðEñ"ýdÿG@ åjëfìEäVîäVï0íñë&@H@ ÿ9ù äëWýEòVìdë&@I@„ëfäVïòëWïLåJ5ëfýEýdå8ìEäí9àôYäKL#MMND#POQ"SR$TU VXWZY\[2] "6T/ú+5ëfï”å9í_^`:;<:&aÖú<bPÿfä”ýEìdå9ë&@¥úC÷wëWäëzûVëVà ßàjâìEï”ë4BxÿWä$úUôRà A 
ëWï”ëfä$úWëWäûDCvà>\ìEæ<Bsÿ@„ÿ9ù[íEóLî¥ànç9éééÖà âEBxåBxÿWìEæ FaãëzíEåjû<ë&5<5VìdÿzëðEñ"ýdÿG@ åjëfìEäVîäVï0íñë&@H@ ÿ9ù äëWýEòVìdë&@L@ ëWäVïòëfï”åG5ëWýEýdåRìEäí9àJc#&(   & L#dO8e$f 2  g %G )*,& U& ihj12#*--& ]'V ú1çzç4k a4^éM; azé4lVàg÷mbá$F nporq é4=l4^4lÖççà ö àløwÿÖû@à çjéé077àiâð9ÿ4Bs5VòVýdëWýEî ÿWäë&@DBxÿ7ûVå @\ÿtr@„ëfä<F ïòëWïLå85@å8ìPt¥ÿfìBsëfäðjåk A ëWýdë,ÿWìEî„åRäLýdåjûi5ëfìdíîäVïàvôuä "$# 3 Lú 5ëWïLåjíI=`99; =09WéVú u\ëWäLýdåjí9ú vÖìdëWäð9åà w)x8yLz{X|<|}~€ƒ‚2x…„m††‡ˆd‰XŠ‹+J‹&‡ŒŽx8‹4†~€†‘’‹&‡ˆx “””4” x–•‹4ˆX{‹4Œ<†Œ˜—4Š‹‘Z‘’‹&‰X™{‹~І~€‹‰X™€}&‡š‹4ˆXˆd™›—4‡<œ ‘’† ‡0‰Mx ,‡ZžLŸ M¡¡¢&£g¤`¥&¦S P§U¨m©«ª'¬6ž\­® ¬S¯U°,±0±`²‹&—0†ˆ ³&´”µ<³¶4·  ¸I‡<™H¹`†Šˆd™H‰»ºi}¼U½i‹&ŠXº ~€‹‡Œ ¸rw ¾r2‚z<‡†x •xM•S‹&ŠŒ ™†L‹&‡Œ…sx ¿$™€†Š{†x “”4”4À xjÁ$ŠXŠ}&ŠXœ,Œ ŠX™›¹`†‡8²<ŠXz<‡<œ ™H‡<—s}¼j‰XІ†Â‹&‡Ãs—4Š‹‘Z‘’‹&Šˆp¼-}&ŠL‹ˆX†_‡}&z+‡Ä²<|<Š‹ˆX† ™€Œ<†‡0‰X™HÅ{‹&‰X™}&‡6x ,‡ÆžLŸ MÇÈÉ P§Ê¯SË$¬6ÌXª’Í?­<Î8¯6¬U ²‹&—0†ˆ ³ “ÀMµ<³4³¶ <½i}&‡0‰XІ‹~ρj•S‹‡‹4Œ<‹<x Ð †‡<‡† ‰X|˜xS•L|0z<Š{X|6x “”4ÀÀ xѾ҈d‰} {X|‹4ˆd‰X™{ƒ²‹ŠX‰ˆ ²<Š}&—Š‹4‘ ‹‡ŒÓ‡}z<‡Ô²<|<Š‹ˆX†E²‹ŠˆX† ŠÕ¼-}&ŠKz<‡<І œ ˆd‰XŠX™€{‰†ŒÖ‰†× ‰Mx؝»‡ÚÙ Ÿ MÇMÈ_ d§ZÎ8¯6¬Û¯$ &¤M§¡ Ÿ¡ ¤ Ç¡J &¤ ÎLÙ`Ù)Ü3£-¡¢…ª…ÝÞ-ß ŸÝÜ2¬6Ý&¤ ¥4ß2Ý¥ ¡'žLŸ MÇ¡¦¦ £¤ ¥0x ½Éx•S}~H~›™H‡ˆx “”4”0à xÑá|<І†«—`†‡†Š‹&‰X™H¹0†4_~†×+™{‹~H™ˆX†Œ ‘’}+Œ+† ~ˆâ¼-}&ŠÕˆd‰‹&‰X™ˆd‰X™€{‹&~Q²‹Šˆd™H‡<—x »‡ÔžLŸ MÇMÈQ d§ Þgã2¡Îs¯6¬)­2¨$Î8¯6¬ÉÎ'¤¤ß2Ý&Ü)©Q¡¡Þ-£¤ ¥0²‹—`†ˆ “·µ<³´  ½i‹4Œ ŠX™€Œ  w0²‹™H‡6 ‚z<~Hº0x Ö‹&~›‰† ŠÉ‹† ~†‘’‹&‡ˆGw ‹&Â<™›‡†äyLz{X|<|}&~Ä‹&‡Œå‚`}ŠX‡ „m††‡ˆd‰XŠ‹<x “”4”4” xֽ؆‘’}&ŠXº œÏ‹4ˆX†Œæˆd|‹&~›~€}çʲ‹&Šˆdœ ™H‡<—2x_»‡ÉžLŸ M¡¡¢&£g¤`¥&¦’ P§G¯$ ªI¬6¬U°,±0±`)yS† ŠX—0† ‡6è}&ŠXœ çS‹º`<‚z<‡†4x éIІ—`}&ŠXºêéIŠ†ë † ‡ˆd‰†‰X‰†4x “””4´ x Ám¹4‹~Hz‹&‰X™}&‡ ‰†{X|<‡<™€ì0z†ˆ8¼-}&Š.‹&z<‰}4‘’‹&‰X™{ƒˆX†‘’‹&‡0‰X™€{i† × ‰XŠ‹{ ‰X™}&‡6í •S}‘8²‹ŠX™H‡<—Eˆdº ‡0‰‹4{‰™{Ջ‡Œ–ç™H‡Œ<}çX†ŒÛ‹&²<œ ²<Š}4‹{X|†ˆx'»‡ïÎ8¯j¬–ðñ ŸòM¦ã2 ÙÉ &¤iÎIÇó ß £€¦£Þ£- ¤É d§ ¬6¡ô&£-ÇÝ&Ü õD¤  &öSÜ¡¢¥ ¡÷ Ÿ øÔù ¡ôÞρúI|<™€}Gw0‰‹&‰†¸I‡<™Hœ ¹`†Šˆd™H‰»º0‚4z<‡†x sx<û'™›‡Œ ~€†8‹‡ŒJ½Éx<ü'} }&‰X|6x “””4´ 
x_w0‰XŠXz{‰Xz<Š‹&~?‹4‘Â<™Hœ —4z<™›‰,ºG‹&‡Œ’~† × ™{‹~І ~‹&‰X™}&‡ˆxs¯$ &ø'Ù)ß+Þ,ÝÞ-£- ¤ ÝÜ0¬?£¤)° ¥4ß £¦Þ-£-Ǧ “”<ýX“Mþ í “ÿ4´µ“³&ÿ x ‹¹+™ŒQ½Éxj½i‹&—`†Š‘’‹&‡6x “”4”x.w0‰‹&‰X™€ˆd‰X™{‹~Œ<†{™€ˆd™}&‡<œ ‰XІ†G‘Z} Œ<†~€ˆ…¼-}&Š’²‹&Šˆd™H‡<—2x֝,‡˜žŸ MÇMÈI P§ÄÞgã¡ Î'¤2¤ß2ÝÜ<©Ø¡¡Þ-£¤ ¥Ä P§Þã¡'ÎI¦¦M MÇ £-Ý4Þ£- ¤§ &Ÿ’¯$ øIÙ ß+° Þ,ÝÞ-£- &¤ ÝÜ+¬?£¤ ¥4ß+£¦ Þ-£-ǦMȯ$Ý&ø  Ÿ£-¢¥ ¡ 6©JÎ  4x ½Éx ¿px ½Ø‹&Š{zˆ yx w ‹&‡0‰}&ŠX™›‡<™Ï ‹&‡Œ ½Éx…½Ø‹Š{ ™›‡Ã`™†ç_™€{x “””4´ x yLz<™›~€Œ ™›‡<—–‹K~€‹&ŠX—0† ‹&‡<‡}‰‹&‰†ŒÚ{}&ŠX²<zˆs}¼IÁ$‡<—4~›™€ˆd|6í.áS|†’¿L† ‡<‡æá6І† œ ‹&‡Ã)x ¯$ øIÙ ß+Þ,ÝÞ-£- &¤ Ýܬ?£¤ ¥4ß £¦Þ-£-Ǧ “”<ý,³þ í ´+“´µ ´4´ÿ <‚4z<‡†x ½Éxj½ñz<‡}46„8x)¿mz<‡0º`‹4Ë&‡}4Ã)?sx ü'}‰X|6?‹&‡ŒQsx2™Hœ ‘’‹4Ã)x “”4”” x–¾ ~€†‹&ŠX‡<™›‡<—ä‹&²<²<Š}4‹{X|щ}Ɉd|‹&~›~€}ç ²‹&Šˆd™›‡<—xK,‡Ñ¨$©JªI¬6žS°M®j¬S¯ ±0± 8Þgã2¡2 &£g¤Þ<Ì&Ͱ  Îùâ¯$ &¤M§¡Ÿ¡ ¤ Ç¡' &¤’¨øIÙ £Ÿ £-ÇÝÜ<©Ø¡Þgã M¢M¦S£¤.ª…ÝÞϰ ß ŸÝÜ)¬6Ý&¤ ¥4ß2Ý¥ ¡žŸ MÇ¡¦¦ £¤ ¥«Ý¤ ¢Q® ¡ Ÿ s¬6Ý&Ÿ-¥ ¡Ä¯$ &Ÿ° Ù2 &ŸÝ`²‹—`†ˆ “·Àµ)“Mà&À ‚z<‡†x  xp¾rx\üI‹4‘’ˆd|‹ç ‹‡ŒÖ½Éx?¿px?½i‹&Š{zˆx “””xÖá?†× ‰ {X|0z<‡Ã0™H‡<—Qzˆd™›‡<—؉XŠ‹‡ˆ,¼-}&Š‘Z‹‰X™€}‡<œÏ‹4ˆX†ŒÉ~€†‹&ŠX‡<™H‡<—2x »‡ïžLŸ Ç¡¡¢&£g¤`¥&¦8 d§rÞgã¡Äùjã+£Ÿ¢Úðñ &ŸòM¦ã2 Ùï &¤K® ¡ Ÿ  ¬6Ý&Ÿ-¥ ¡Z¯$ Ÿ»Ù2 ŸÝx ¾rxmü'‹‰X‡‹&²‹&ŠÃ0|<™-x “”4”`à xѾ ~H™H‡†‹&Šñ}&ˆX†ŠX¹`†Œ˜‰X™€‘’† ˆd‰‹&‰X™ˆd‰X™€{‹&~D²‹&ŠˆX†Š.‹ˆX†Œ }&‡Ñ‘’‹&× ™‘Dz‘ †‡0‰XŠ}&²0º ‘’}+Œ+† ~ˆx?,‡Q¨m©«ª'¬6ž!  ¿$Š}¹ ™Œ<† ‡{†ü_<½Ø‹Š{X|6x ü'†‘ZÃ} w {X|‹+Qü'†‡ˆäyS} Œ ï‹‡Œ Ð |‹&~›™H~Úw0™‘Z‹"3‹‡6x “”4”” x?¾â‘’†‘Z}ŠXº œÏ‹4ˆX†ŒD‘’}+Œ<†~ }¼)ˆdº ‡`‰‹{ ‰X™{S‹&‡‹&~›œ º2ˆd™ˆí_‹‰‹&œ,}ŠX™€†‡`‰†Œi²‹&Šˆd™›‡<—x &ß Ÿ ¤ Ý&ÜU P§D¨$ôÙ2¡Ÿ ° £gøZ¡ ¤)Þ,Ý&Ü6Ý&¤ ¢iùjã2¡ &Ÿ¡Þ£-ÇÝÜ2Î'́ “4“ í ¶ÿ4”µ`¶4¶4ÿ x w ‹&‰}4ˆd|<™…w †Ã0™H‡†Q‹&‡Œ ü'‹~H²<|é'ŠX™ˆd|‘’‹&‡6x “”4”xE¾ {}ŠX²<zˆdœÏ‹4ˆX†ŒÉ²<Š}&‹&Â<™›~H™ˆd‰X™€{ñ—4Š‹4‘’‘’‹&Š8ç™H‰X|æ}&‡<~Hº ‰,çS}‡}‡<œÏ‰†Š‘8™›‡‹&~€ˆx ,‡Ó÷? 
&ß ŸÞãÌ¤)Þ,¡ Ÿ¤ Ý4Þ£- ¤ Ý&Ü ðƒ &ŸòM¦ã2 Ù  &¤ ž_Ý&Ÿ¦£g¤`¥ ù ¡Çã ¤  &Ü X¥#$ Ì ðGžù? ¿mŠ‹—4z†x w ‹&‰}4ˆd|<™pw †Ã0™H‡†x “”4”4À xZ¯$ Ÿ»Ù ß0¦°&%ݦ¡¢…ž_Ý&Ÿ¦ £¤`¥ƒÝ&¤ ¢ ß' Üݤ`¥0ß2Ý¥ ¡()Þߢ&£-¡¦xƒ¿$|6x 8x?‰X|†ˆd™€ˆ$è†ç*)m}&Šà ¸'‡<™›¹`†ŠˆP™›‰,º0x xw Ã0z<‰Q‹&‡ŒšáDx'yLŠ‹‡`‰ˆx “”4”4À x ¾ ‘Z‹× ™€‘Dz‘sœ † ‡0‰XŠ}&²0ºJ²‹&ŠX‰X™‹&~?²‹ŠˆX† Š_¼-}&ŠIz<‡<Іˆd‰XŠX™{ ‰†Œñ‰† × ‰Mx$,‡ žŸ MÇMÈ  d§LÞgã2¡m¦ £ôÞgãiðñ &ŸòM¦ã2 Ù« &¤ï® ¡ Ÿ _¬6Ý&Ÿ-¥ ¡¯$ Ÿ ° Ù &ŸÝ40½Ø}‡`‰XІ‹&~-\•S‹‡‹4Œ<‹<x Á'x,+xpá-d}&‡<— Ð ™€‘ w ‹‡<—x “””4” xiè}&z<‡æ²<|<Š‹4ˆX†JŒ<†œ ‰†{‰X™€}‡ƒÂ0º.І²j†‹&‰†ŒJ{X|0z<‡Ã0™H‡<—xL»‡ØžLŸ M¡¡¢£¤ ¥&¦s d§ ¯$ ªI¬6¬U°,±0±`<yS†ŠX—`†‡6+è}&ŠXçS‹º0‚4z<‡†x ‚x…„$††‡ˆd‰XŠ‹<x “”4”4À x.+2‹4ˆd‰˜è'¿ {X|`z<‡Ã0™›‡<—–zˆd™›‡<— ‘Z†‘’}&ŠXº œÏ‹4ˆX†Œñ~€†‹&ŠX‡<™›‡<—؉†{X|<‡<™ì0z†ˆxs,‡/+Sx)„m†ŠXœ Œ<† ‡<™›zˆL‹&‡Œsx&¹4‹&‡GŒ<†‡ÄyLŠ} †Ã)4†Œ ™›‰}&Šˆ2žLŸ MÇ¡¡¢° £g¤`¥&¦Ö d§0%¡ ¤ ¡ Ü¡Ý&Ÿ ¤S²‹&—`†ˆ à`“µ<à” 'Ö‹—`†‡<™H‡<—0† ‡6 ‰X|†è†‰X|† ŠX~‹&‡Œ<ˆx
Importance of Pronominal Anaphora resolution in Question Answering systems

José L. Vicedo and Antonio Ferrández
Departamento de Lenguajes y Sistemas Informáticos
Universidad de Alicante
Apartado . 000 Alicante, Spain
{vicedo,antonio}@dlsi.ua.es

Abstract

The main aim of this paper is to analyse the effects of applying pronominal anaphora resolution to Question Answering (QA) systems. For this task a complete QA system has been implemented. System evaluation measures the performance improvements obtained when information that is referenced anaphorically in documents is not ignored.

1 Introduction

Open domain QA systems are defined as tools capable of extracting the answer to user queries directly from unrestricted domain documents. Or at least, systems that can extract text snippets from texts, from whose content it is possible to infer the answer to a specific question. In both cases, these systems try to reduce the amount of time users spend locating specific information. This work is intended to achieve two principal objectives. First, we analyse several document collections to determine the level of information referenced pronominally in them. This study gives us an overview of the amount of information that is discarded when these references are not solved. As a second objective, we try to measure the improvements obtained by solving this kind of reference in QA systems. With this purpose in mind, a full QA system has been implemented. The benefits obtained by solving pronominal references are measured by comparing system performance with and without taking into account information referenced pronominally. Evaluation shows that solving these references improves QA performance. In the following section, the state-of-the-art of open domain QA systems will be summarised. Afterwards, the importance of pronominal references in documents is analysed.
Next, our approach and system components are described. Finally, evaluation results are presented and discussed.

2 Background

Interest in open domain QA systems is quite recent. We had little information about this kind of systems until the First Question Answering Track was held in the last TREC conference (TRE, 1999). In this conference, nearly twenty different systems were evaluated with very different success rates.

We can classify current approaches into two groups: text-snippet extraction systems and noun-phrase extraction systems.

Text-snippet extraction approaches are based on locating and extracting the sentences or paragraphs most relevant to the query, by supposing that this text will contain the correct answer to the query. This approach has been the most commonly used by participants in the last TREC QA Track. Examples of these systems are (Moldovan et al., 1999), (Singhal et al., 1999), (Prager et al., 1999), (Takaki, 1999), (Hull, 1999) and (Cormack et al., 1999). After reviewing these approaches, we can notice that there is a general agreement about the importance of several Natural Language Processing (NLP) techniques for the QA task. Pos-tagging, parsing and Name Entity recognition are used by most of the systems. However, few systems apply other NLP techniques. Particularly, only four systems model some coreference relations between entities in the query and documents (Morton, 1999), (Breck et al., 1999), (Oard et al., 1999) and (Humphreys et al., 1999). As an example, Morton's approach models identity, definite noun-phrases and non-possessive third person pronouns. Nevertheless, the benefits of applying these coreference techniques have not been analysed and measured separately.

The second group includes noun-phrase extraction systems. These approaches try to find the precise information requested by questions whose answer is typically defined by a noun phrase. MURAX is one of these systems (Kupiec, 1999).
It can use information from different sentences, paragraphs and even different documents to determine the answer (the most relevant noun-phrase) to the question. However, this system does not take into account the information referenced pronominally in documents. Simply, it is ignored.

With our system, we want to determine the benefits of applying pronominal anaphora resolution techniques to QA systems. Therefore, we apply the developed computational system, Slot Unification Parser for Anaphora resolution (SUPAR), over documents and queries (Ferrández et al., 1999). SUPAR's architecture consists of three independent modules: lexical analysis, syntactic analysis, and a resolution module for natural language processing problems, such as pronominal anaphora. For evaluation, a standard based IR system and a sentence-extraction QA system have been implemented. Both are based on Salton's approach (1989). After the IR system retrieves relevant documents, our QA system processes these documents with and without solving pronominal references in order to compare final performance. As results will show, pronominal anaphora resolution greatly improves QA system performance. So, we think that this NLP technique should be considered as part of any open domain QA system.

3 Importance of pronominal information in documents

Trying to measure the importance of information referenced pronominally in documents, we have analysed several text collections used for the QA task in the TREC-8 Conference as well as others used frequently for IR system testing. These collections were the following: Los Angeles Times (LAT), Federal Register (FR), Financial Times (FT), Federal Bureau Information Service (FBIS), TIME, CRANFIELD, CISI, CACM, MED and LISA. This analysis consists of determining the amount and type of pronouns used, as well as the number of sentences containing pronouns, in each of them.
As an average measure of pronouns used in a collection, we use the ratio between the quantity of pronouns and the number of sentences containing pronouns. This measure approximates the level of information that is ignored if these references are not solved. Figure 1 shows the results obtained in this analysis.

As we can see, the amount and type of pronouns used in the analysed collections vary depending on the subject the documents talk about. LAT, FBIS, TIME and FT collections are composed of news published in different newspapers. The ratio of pronominal reference used in this kind of documents is very high (from 35,96% to 55,20%). These documents contain a great number of pronominal references in third person (he, she, they, his, her, their) whose antecedents are mainly people's names. In this type of documents, pronominal anaphora resolution seems to be very necessary for a correct modelling of relations between entities.

CISI and MED collections appear ranked next in decreasing ratio level order. These collections are composed of general comments about document managing, classification and indexing, and documents extracted from medical journals, respectively. Although the ratio presented by these collections (24,94% and 22,16%) is also high, the most important group of pronominal references used in these collections is formed by "it" and "its" pronouns.
In this case, the antecedents of these pronominal references are mainly concepts represented typically by noun phrases. It seems again important to solve these references for a correct modelling of relations between concepts expressed by noun-phrases.

The lowest ratio results are presented by the CRANFIELD collection, with a 9,05%. The reason for this level of pronominal use is due to text contents. This collection is composed of extracts on very technical subjects.

Between the described percentages we find the CACM, LISA and FR collections. These collections are formed by abstracts and documents extracted from the Federal Register, from the CACM journal and from Library and Information Science Abstracts, respectively.

As general behaviour, we can notice that the more technical document contents become, the more the pronouns "it" and "its" dominate in documents and the lower the ratio of pronominal references used becomes.

Pronoun type                   LAT     FBIS    TIME    FT      CISI    MED     CACM    LISA    FR      CRANFIELD
HE, SHE, THEY                  38,59%  29,15%  31,20%  26,20%  15,38%  15,07%   8,59%  12,24%  13,31%   6,54%
HIS, HER, THEIR                25,84%  21,54%  35,01%  20,52%  22,96%  21,46%  15,69%  31,03%  20,70%  10,35%
IT, ITS                        26,92%  39,60%  22,43%  46,68%  52,11%  57,41%  67,61%  47,86%  61,06%  79,76%
HIM, THEM                       7,04%   7,08%   7,82%   4,44%   6,38%   3,96%   4,87%   6,30%   3,45%   1,60%
HIM, HER, IT(SELF), THEMSELVES  1,61%   2,63%   3,54%   2,17%   3,17%   2,10%   3,25%   2,57%   1,48%   1,75%

Pronouns in sentences
Containing 0 pronouns          44,80%  48,09%  51,37%  64,04%  75,06%  77,84%  79,06%  83,79%  84,92%  90,95%
Containing 1 pronoun           30,40%  31,37%  29,46%  23,07%  17,17%  15,02%  17,54%  13,01%  11,64%   8,10%
Containing 2 pronouns          14,94%  12,99%  12,26%   8,54%   5,27%   4,75%   2,79%   2,56%   2,57%   0,85%
Containing +2 pronouns          9,86%   7,55%   6,90%   4,34%   2,51%   2,39%   0,60%   0,64%   0,88%   0,09%

Ratio of pronominal reference  55,20%  51,91%  48,63%  35,96%  24,94%  22,16%  20,94%  16,21%  15,08%   9,05%

Figure 1: Pronominal references in text collections
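The collection statistics in Figure 1 amount to a counting pass over sentences. A minimal sketch of such a pass (the pronoun inventory and the tokenizer here are simplified assumptions, not the authors' actual tooling):

```python
import re

# Simplified pronoun inventory, following the categories of Figure 1
PRONOUNS = {"he", "she", "they", "his", "her", "their", "it", "its",
            "him", "them", "himself", "herself", "itself", "themselves"}

def pronoun_stats(sentences):
    """Count pronouns per sentence and the share of sentences containing any."""
    per_sentence = []
    for s in sentences:
        tokens = re.findall(r"[a-z]+", s.lower())
        per_sentence.append(sum(t in PRONOUNS for t in tokens))
    with_pronouns = sum(1 for n in per_sentence if n > 0)
    ratio = 100.0 * with_pronouns / len(per_sentence)
    return per_sentence, ratio

counts, ratio = pronoun_stats([
    "John met Mary.",
    "He gave her the report.",
    "They discussed it at length.",
])
print(counts)        # [0, 2, 2]
print(round(ratio))  # 67
```

On a real collection the same pass also yields the per-type breakdown of the figure by keeping a counter per pronoun group instead of a single total.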
Another observation can be extracted from this analysis. The distribution of pronouns within sentences is similar in all collections. Pronouns appear scattered through sentences containing one or two pronouns. Using more than two pronouns in the same sentence is quite infrequent.

After analysing these results an important question may arise. Is it worth solving pronominal references in documents? It would seem reasonable to think that resolution of pronominal anaphora should only be accomplished when the ratio of pronominal occurrence exceeds a minimum level. However, we have to take into account that the cost of solving these references is proportional to the number of pronouns analysed and, consequently, proportional to the amount of information a system will ignore if these references are not solved. As the results above state, it seems reasonable to solve pronominal references in queries and documents for QA tasks, at least when the ratio of pronouns used in the documents recommends it. Anyway, evaluation and later analysis (section 5) contribute empirical data to conclude that applying pronominal anaphora resolution techniques improves QA system performance.

4 Our Approach

Our system is made up of three modules. The first one is a standard IR system that retrieves relevant documents for queries. The second module manages anaphora resolution in both queries and retrieved documents. For this purpose we use the SUPAR computational system (section 4.1). And the third one is a sentence-extraction QA system that interacts with the SUPAR module and ranks sentences from retrieved documents to locate where the correct answer appears (section 4.2).

For the purpose of evaluation an IR system has been implemented. This system is based on the standard information retrieval approach to document ranking described in Salton (1989).
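This Salton-style ranking combines inverse document frequency weights with cosine similarity. A toy sketch of both pieces (illustrative bag-of-words documents; not the actual implementation, which stems terms and indexes the LAT collection):

```python
import math

def idf(term, docs):
    """Inverse document frequency: log(N / df(t))."""
    df = sum(1 for d in docs if term in d)
    return math.log(len(docs) / df) if df else 0.0

def cosine(q, d):
    """Cosine similarity between two sparse term-weight vectors (dicts)."""
    dot = sum(w * d.get(t, 0.0) for t, w in q.items())
    nq = math.sqrt(sum(w * w for w in q.values()))
    nd = math.sqrt(sum(w * w for w in d.values()))
    return dot / (nq * nd) if nq and nd else 0.0

# Toy collection of three "documents" (sets of stemmed terms)
docs = [{"energy", "commission"}, {"energy", "purchase"}, {"village", "head"}]
w = {t: idf(t, docs) for t in {"energy", "village"}}
print(round(w["energy"], 3), round(w["village"], 3))  # rarer term weighs more
print(round(cosine(w, w), 3))                         # identical vectors: 1.0
```

A query and a document (or sentence) are each turned into such a weight vector, and ranking sorts by the cosine score.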
For the QA task, the same approach has been used as baseline, but using sentences as the text unit. Each term in the query and documents is assigned an inverse document frequency (idf) score based on the same corpus. This measure is computed as:

    idf(t) = log(N / df(t))                                    (1)

where N is the total number of documents in the collection and df(t) is the number of documents which contain term t. Query expansion consists of stemming terms using a version of the Porter stemmer. Document and sentence similarity to the query was computed using the cosine similarity measure. The LAT corpus has been selected as test collection due to its high level of pronominal references.

4.1 Solving pronominal anaphora

In this section, the NLP Slot Unification Parser for Anaphora Resolution (SUPAR) is briefly described (Ferrández et al., 1998; Ferrández et al., 1999). SUPAR's architecture consists of three independent modules that interact with one another. These modules are lexical analysis, syntactic analysis, and a resolution module for Natural Language Processing problems.

Lexical analysis module. This module takes each sentence to parse as input, along with a tool that provides the system with all the lexical information for each word of the sentence. This tool may be either a dictionary or a part-of-speech tagger. In addition, this module returns a list with all the necessary information for the remaining modules as output. SUPAR works sentence by sentence from the input text, but stores information from previous sentences, which it uses in other modules (e.g. the list of antecedents of previous sentences for anaphora resolution).

Syntactic analysis module. This module takes as input the output of the lexical analysis module and the syntactic information represented by means of the grammatical formalism Slot Unification Grammar (SUG).
It returns what is called a slot structure, which stores all necessary information for the following modules. One of the main advantages of this system is that it allows carrying out either partial or full parsing of the text.

Module of resolution of NLP problems. In this module, NLP problems (e.g. anaphora, extraposition, ellipsis or PP-attachment) are dealt with. It takes the slot structure (SS) that corresponds to the parsed sentence as input. The output is an SS in which all the anaphors have been resolved. In this paper, only pronominal anaphora resolution has been applied. The kinds of knowledge used for pronominal anaphora resolution in this paper are: pos-tagging, partial parsing, statistical knowledge, c-command and morphologic agreement as restrictions, and several heuristics such as syntactic parallelism, preference for noun-phrases in the same sentence as the pronoun, and preference for proper nouns. We should remark that when we work with unrestricted texts (as occurs in this paper) we do not use semantic knowledge (i.e. a tool such as WordNet). Presently, SUPAR resolves both Spanish and English pronominal anaphora with a success rate of % and % respectively.

SUPAR pronominal anaphora resolution differs from approaches based on restrictions and preferences in that the aim of our preferences is not to sort candidates, but rather to discard candidates. That is to say, preferences are considered in a similar way to restrictions, except when no candidate satisfies a preference, in which case no candidate is discarded. For example, in the sentences "Rob was asking us about John. I replied that Peter saw John yesterday. James also saw him.", after applying the restrictions, the following list of candidates is obtained for the pronoun him: [John, Peter, Rob], which are then sorted according to their proximity to the anaphora.
If the preference for candidates in the same sentence as the anaphora is applied, then no candidate satisfies it, so the following preference is applied on the same list of candidates. Next, the preference for candidates in the previous sentence is applied and the list is reduced to the following candidates: [John, Peter]. If the syntactic parallelism preference is then applied, only one candidate remains, [John], which will be the antecedent chosen.

Each kind of anaphora has its own set of restrictions and preferences, although they all follow the same general algorithm: first come the restrictions, after which the preferences are applied. For pronominal anaphora, the set of restrictions and preferences that apply is described in Figure 2.

Procedure SelectingAntecedent (INPUT L: ListOfCandidates,
                               OUTPUT Solution: Antecedent)
  Apply restrictions to L with a result of L1:
    Morphologic agreement
    C-command constraints
    Semantic consistency
  Case of:
    NumberOfElements (L1) = 1
      Solution = TheFirstOne (L1)
    NumberOfElements (L1) = 0
      Exophora or cataphora
    NumberOfElements (L1) > 1
      Apply preferences to L1 with a result of L2:
        1) Candidates in the same sentence as the anaphor.
        2) Candidates in the previous sentence.
        3) Preference for proper nouns.
        4) Candidates in the same position as the anaphor with
           reference to the verb (before or after).
        5) Candidates with the same number of parsed constituents
           as the anaphora.
        6) Candidates that have appeared with the verb of the
           anaphor more than once.
        7) Preference for indefinite NPs.
      Case of:
        NumberOfElements (L2) = 1
          Solution = TheFirstOne (L2)
        NumberOfElements (L2) > 1
          Extract from L2 into L3 those candidates that have been
          repeated most in the text
          If NumberOfElements (L3) > 1
            Extract from L3 into L4 those candidates that have
            appeared most with the verb of the anaphora
            Solution = TheFirstOne (L4)
          Else
            Solution = TheFirstOne (L3)
          EndIf
      EndCase
  EndCase
EndProcedure

Figure 2: Pronominal anaphora resolution algorithm

The restrictions are first applied to the list of candidates: morphologic agreement, c-command constraints and semantic consistency. This list is sorted by proximity to the anaphor. Next, if after applying the restrictions there is still more than one candidate, the preferences are applied, in the order shown in this figure. This sequence of preferences (from 1 to 7) stops when, after having applied a preference, only one candidate remains. If after applying the preferences there is still more than one candidate, then the most repeated candidates (1) in the text are extracted from the resulting list. After this is done, if there is still more than one candidate, then those candidates that have appeared most frequently with the verb of the anaphor are extracted from the previous list. Finally, if after having applied all the previous preferences there is still more than one candidate left, the first candidate of the resulting list (the closest one to the anaphor) is selected.

4.2 Anaphora resolution and QA

Our QA approach provides a second level of processing for relevant documents: analysing matching documents and sentence ranking.

Analysing matching documents. This step is applied over the best matching documents retrieved by the IR system. These documents are analysed by the SUPAR module and pronominal references are solved. As a result, each pronoun is associated with the noun phrase it refers to in the documents.
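The restriction-and-preference cascade that drives this resolution step can be sketched as follows. The candidate representation and the individual tests here are simplified stand-ins, not SUPAR's actual knowledge sources:

```python
def resolve(candidates, restrictions, preferences):
    """Restrictions are hard filters; preferences only discard candidates
    when at least one candidate satisfies them (discard-style cascade)."""
    pool = [c for c in candidates if all(r(c) for r in restrictions)]
    if len(pool) != 1 and pool:
        for pref in preferences:
            kept = [c for c in pool if pref(c)]
            if kept:             # a preference never empties the pool
                pool = kept
            if len(pool) == 1:   # stop as soon as one candidate remains
                break
    return pool[0] if pool else None  # empty pool: exophora or cataphora

# Toy run mirroring the "James also saw him" example:
cands = [{"name": "John", "sent": -1, "role": "obj"},
         {"name": "Peter", "sent": -1, "role": "subj"},
         {"name": "Rob", "sent": -2, "role": "subj"}]
prefs = [lambda c: c["sent"] == 0,        # same sentence: nobody passes
         lambda c: c["sent"] == -1,       # previous sentence: John, Peter
         lambda c: c["role"] == "obj"]    # syntactic parallelism with "him"
print(resolve(cands, [], prefs)["name"])  # John
```

The tie-breaking stages of Figure 2 (most repeated candidate, candidate most seen with the anaphor's verb, proximity) would simply extend the `preferences` list in the same discard-only fashion.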
Then, documents are split into sentences as the basic text unit for QA purposes. This set of sentences is sent to the sentence ranking stage.

Sentence ranking. Each term in the query is assigned a weight. This weight is the sum of the inverse document frequency measures of the term based on its occurrence in the LAT collection described earlier. Each document sentence is weighted the same way. The only difference with the baseline is that pronouns are given the weight of the entity they refer to. As we only want to analyse the effects of pronominal reference resolution, no more changes are introduced in the weighting scheme. For sentence ranking, cosine similarity is used between query and document sentences.

(1) Here, we mean that firstly we obtain the maximum number of repetitions for an antecedent in the remaining list. After that, we extract from that list the antecedents that have this value of repetition.

5 Evaluation

For this evaluation, several people unacquainted with this work proposed queries whose correct answer appeared at least once in the analysed collection. These queries were also selected based on their expressing the user's information need clearly and their being likely answered in a single sentence. First, relevant documents for each query were retrieved using the IR system described earlier. Only the best matching documents were selected for QA evaluation. As the document containing the correct answer was included in the retrieved sets for only 93 queries, the remaining queries were excluded from this evaluation. Once retrieval of relevant document sets was accomplished for each query, the system applied the anaphora resolution algorithm to these documents. Finally, sentence matching and ranking was accomplished as described in section 4.2
and the system presented a ranked list containing the most relevant sentences for each query.

For a better understanding of the evaluation results, queries were classified into three groups depending on the following characteristics:

- Group A. There are no pronominal references in the target sentence (the sentence containing the correct answer).

- Group B. The information required as answer is referenced via pronominal anaphora in the target sentence.

- Group C. Any term in the query is referenced pronominally in the target sentence.

Group A was made up of 37 questions. Groups B and C contained 25 and 31 queries respectively. Figure 3 shows examples of queries classified into groups B and C.

Evaluation results are presented in Figure 4 as the number of target sentences appearing among the most relevant sentences returned by the system for each query and also the number of these sentences that are considered a correct answer. An answer is considered correct if it can be obtained by simply looking at the target sentence.

Group B example
  Question: "Who is the village head man of Digha?"
  Answer: "He is the sarpanch, or village head man of Digha, a hamlet of mud-and-straw huts 10 miles from ..."
  Anaphora resolution: Ram Bahadu

Group C example
  Question: "What did Democrats propose for low-income families?"
  Answer: "They also want to provide small subsidies for low-income families in which both parents work at outside jobs."
  Anaphora resolution: Democrats

Figure 3: Group B and C query examples

Results
Bene ts obtained from applying pronominal anaphora resolution v ary dep ending on question t yp e. Results for group A and B queries sho w us that relev ance to the query is the same as baseline system. So, it seems that pronominal anaphora resolution do es not ac hiev e an y impro v emen t. This is true only for group A questions. Although target sen tences are rank ed similarly , for group B questions, target sen tences returned b y baseline can not b e considered as correct b ecause w e do not obtain the answ er b y simply lo oking at returned sen tences. The correct answ er is displa y ed only when pronominal anaphora is solv ed and pronominal references are substituted b y the noun phrase they refer to. Only if pronominal references are solv ed, the user will not need to read more text to obtain the correct answ er. F or noun-phrase extraction QA systems the impro v emen t is greater. If pronominal references are not solv ed, this information will Baseline Anaphora solved Answer Type Number Target included Correct answer Target included Correct answer A 37 (39,78%) 18 (48,65%) 18 (48,65%) 18 (48,65%) 18 (48,65%) B 25 (26,88%) 12 (48,00%) 0 (0,00%) 12 (48,00%) 12 (48,00%) C 31 (33,33%) 9 (29,03%) 9 (29,03%) 21 (67,74%) 21 (67,74%) A+B+C 93 (100,00%) 39 (41,94%) 27 (29,03%) 51 (54,84%) 51 (54,84%) Figure : Ev aluation results not b e analysed and probably a wrong nounphrase will b e giv en as answ er to the query . Results impro v e again if w e analyse group C queries p erformance. These queries ha v e the follo wing c haracteristic: some of the query terms w ere referenced via pronominal anaphora in the relev an t sen tence. When this situation o ccurs, target sen tences are retriev ed earlier in the nal rank ed list than in the baseline list. This impro v emen t is b ecause similarit y increases b et w een query and target sen tence when pronouns are w eigh ted with the same score as their referring terms. 
The percentage of target sentences obtained increases 38,71 points (from 29,03% to 67,74%). The aggregate results presented in Figure 4 measure the improvement obtained considering the system as a whole. The general percentage of target sentences obtained increases 12,90 points (from 41,94% to 54,84%) and the level of correct answers returned by the system increases 25,81 points (from 29,03% to 54,84%).

At this point we need to consider the following question: will these results be the same for any other question set? We have analysed the test questions in order to determine if the results obtained depend on the question test set. We argue that a well-balanced query set would have a percentage of target sentences that contain pronouns (PTSC) similar to the pronominal reference ratio of the text collection that is being queried. Besides, we suppose that the probability of finding an answer in a sentence is the same for all sentences in the collection. Comparing the LAT ratio of pronominal reference (55,20%) with the question test set PTSC we can measure how a question set can affect results. Our question set PTSC value is 60,22%. We obtain as target sentences containing pronouns only 5,02% more than expected when test queries are randomly selected.

In order to obtain results according to a well-balanced question set, we discarded five questions from both groups B and C. Figure 5 shows that the results for this well-balanced question set are similar to the previous results. Aggregate results show that the general percentage of target sentences increases 10,84 points when solving pronominal anaphora and the level of correct answers retrieved increases 22,89 points (instead of the 12,90 and 25,81 obtained in the previous evaluation, respectively).

As the results show, we can say that pronominal anaphora resolution improves QA system performance in several aspects. First, precision increases when query terms are referenced anaphorically in the target sentence.
Second, pronominal anaphora resolution reduces the amount of text a user has to read when the answer sentence is displayed and pronominal references are substituted with their coreferent noun phrases. And third, for noun phrase extraction QA systems it is essential to solve pronominal references if good performance is pursued.

6 Conclusions and future research

The analysis of information referenced pronominally in documents has revealed it to be important for tasks where a high level of recall is required. We have analysed and measured the effects of applying pronominal anaphora resolution in QA systems. As the results show, its application greatly improves QA performance and seems to be essential in some cases.

Three main areas of future work have appeared while this investigation was developed. First, the IR system used for retrieving relevant documents has to be adapted for QA tasks. The IR system used obtained the document containing the target sentence for only 93 of the proposed queries. Therefore, its precision needs to be improved. Second, the anaphora resolution algorithm has to be extended to different types of anaphora such as definite descriptions, surface count, verbal phrase and one-anaphora. And third, the sentence ranking approach has to be analysed to maximise the percentage of target sentences included among the answer sentences presented by the system.

                           Baseline                          Anaphora solved
Answer type  Number        Target included  Correct answer   Target included  Correct answer
A            37 (39,78%)   18 (48,65%)      18 (48,65%)      18 (48,65%)      18 (48,65%)
B            20 (21,51%)   10 (50,00%)       0 (0,00%)       10 (50,00%)      10 (50,00%)
C            26 (27,96%)    9 (34,62%)       9 (34,62%)      18 (69,23%)      18 (69,23%)
A+B+C        83 (89,25%)   37 (44,58%)      27 (32,53%)      46 (55,42%)      46 (55,42%)

Figure 5: Well-balanced question set results

References

Eric Breck, John Burger, Lisa Ferro, David House, Marc Light, and Inderjeet Mani. 1999. A Sys Called Quanda.
In Eighth Text REtrieval Conference (TRE, 1999).

Gordon V. Cormack, Charles L. A. Clarke, Christopher R. Palmer, and Derek I. E. Kisman. 1999. Fast Automatic Passage Ranking (MultiText Experiments for TREC-8). In Eighth Text REtrieval Conference (TRE, 1999).

Antonio Ferrández, Manuel Palomar, and Lidia Moreno. 1998. Anaphora resolution in unrestricted texts with partial parsing. In 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, COLING-ACL.

Antonio Ferrández, Manuel Palomar, and Lidia Moreno. 1999. An empirical approach to Spanish anaphora resolution. To appear in Machine Translation.

David A. Hull. 1999. Xerox TREC-8 Question Answering Track Report. In Eighth Text REtrieval Conference (TRE, 1999).

Kevin Humphreys, Robert Gaizauskas, Mark Hepple, and Mark Sanderson. 1999. University of Sheffield TREC-8 Q&A System. In Eighth Text REtrieval Conference (TRE, 1999).

Julian Kupiec. 1999. MURAX: Finding and Organising Answers from Text Search. Kluwer Academic, New York.

Dan Moldovan, Sanda Harabagiu, Marius Pasca, Rada Mihalcea, Richard Goodrum, Roxana Gîrju, and Vasile Rus. 1999. LASSO: A Tool for Surfing the Answer Net. In Eighth Text REtrieval Conference (TRE, 1999).

Thomas S. Morton. 1999. Using Coreference in Question Answering. In Eighth Text REtrieval Conference (TRE, 1999).

Douglas W. Oard, Jianqiang Wang, Dekang Lin, and Ian Soboroff. 1999. TREC-8 Experiments at Maryland: CLIR, QA and Routing. In Eighth Text REtrieval Conference (TRE, 1999).

John Prager, Dragomir Radev, Eric Brown, Anni Coden, and Valerie Samn. 1999. The Use of Predictive Annotation for Question Answering. In Eighth Text REtrieval Conference (TRE, 1999).

Gerard A. Salton. 1989. Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by Computer. Addison Wesley, New York.
Amit Singhal, Steve Abney, Michiel Bacchiani, Michael Collins, Donald Hindle, and Fernando Pereira. 1999. AT&T at TREC-8. In Eighth Text REtrieval Conference (TRE, 1999).

Toru Takaki. 1999. NTT DATA: Overview of system approach at TREC-8 ad-hoc and question answering. In Eighth Text REtrieval Conference (TRE, 1999).

TREC-8. 1999. Eighth Text REtrieval Conference.
2000
The Structure and Performance of an Open-Domain Question Answering System

Dan Moldovan, Sanda Harabagiu, Marius Pasca, Rada Mihalcea,
Roxana Girju, Richard Goodrum and Vasile Rus
Department of Computer Science and Engineering
Southern Methodist University
Dallas, Texas
[email protected]

Abstract

This paper presents the architecture, operation and results obtained with the LASSO Question Answering system developed in the Natural Language Processing Laboratory at SMU. To find answers, the system relies on a combination of syntactic and semantic techniques. The search for the answer is based on a novel form of indexing called paragraph indexing. A score of 55.5% for short answers and 64.5% for long answers was achieved at the TREC-8 competition.

1 Background

Finding the answer to a question by returning a small fragment of text, where the answer actually lies, is profoundly different from the task of information retrieval (IR) or information extraction (IE). Current IR systems allow us to locate full documents that might contain pertinent information, leaving it to the user to extract the answer from a ranked list of texts. In contrast, IE systems extract the information of interest, provided it has been presented in a predefined, target representation, known as template. The immediate solution of combining IR and IE techniques for question/answering (Q/A) is impractical since IE systems are known to be highly dependent on domain knowledge, and furthermore, the template generation is not performed automatically.

Our methodology of finding answers in large collections of documents relies on natural language processing (NLP) techniques in novel ways. First, we perform the processing of the question by combining syntactic information, resulting from a shallow parse, with semantic information that characterizes the question (e.g. question type, question focus). Secondly, the search for the answer is based on a novel form of indexing, called paragraph indexing (Moldovan and Mihalcea 2000). Finally, in order to extract answers and to evaluate their correctness, we use a battery of abductive techniques (Hobbs et al. 1993), some based on empirical methods, some on lexico-semantic information. The principles that have guided our paragraph indexing and the abductive inference of the answers are reported in (Harabagiu and Maiorano 1999).

2 Overview of the LASSO Q/A System

The architecture of LASSO (Moldovan, Harabagiu et al. 1999) comprises three modules: the Question Processing module, the Paragraph Indexing module and the Answer Processing module. Given a question, of open-ended nature, expressed in natural language, we first process the question by creating a representation of the information requested. Thus we automatically find (a) the question type from the taxonomy of questions built into the system, (b) the expected answer type from the semantic analysis of the question, and most importantly, (c) the question focus, defined as the main information required by that question. Furthermore, the Question Processing module also identifies the keywords from the question, which are passed to the Paragraph Indexing module, as illustrated by Figure 1.

Figure 1: Architecture of the LASSO Q/A System. [Diagram: a Question enters the Question Processing module (question type, question focus, answer type, question keywords); the keywords feed the Paragraph Indexing module (IR search engine over the collection index and documents, paragraph filtering, a paragraph quality test with a yes/no feedback loop, paragraph ordering); the selected paragraphs feed the Answer Processing module (parse, answer identification, answer extraction, answer correctness), which returns the Answer(s).]

In LASSO, documents are indexed by a modified Zprise IR system available from NIST. Our search engine incorporates a set of Boolean operators (e.g. AND, OR, NOT, NEAR). We post-process the results of the IR search engine by filtering out the returns that do not contain all the keywords of a question in the same paragraph. This operation allows for on-the-fly generation of a paragraph index. The second important feature of the Paragraph Indexing module comes from the evaluation of the quality of the paragraphs. When the quality is satisfactory, the paragraphs are ordered according to a plausibility degree of containing the answer. Otherwise, we form new queries by adding or dropping keywords and resume the paragraph retrieval process. This loop generates a feed-back retrieval context that enables only a reasonable number of paragraphs to be passed to the Answer Processing module.

The advantage of processing paragraphs instead of full documents is faster syntactic parsing.
§£Ÿ›2gŸ/™†ž~™ÍJ­š™|©!˜š¡+¿J©„­ª¿JžRÀ-J¾?ž~« Ø'¡Î¢†˜ª¢ ½ Ÿ†ž~¥U©J¼„¡s˜š¢†˜ª©„¡s™BJ¡s«,­šžUÑF˜š¥U©g¦ò™†ž~¾ÍJ¡Î¢†˜¯¥ Ÿ†žU¦ ™|©„§sŸ/¥Už~™§s™†žlÌH§s­3˜š¡‹¢†—sž)J¡s™ ° žlŸžUÑF¢|Ÿ/J¥U¢†˜ª©„¡Æ j ; à ( r’ * ßOálk&“„ß• ( g * áÞ –—sž4Ÿ†©„­šž4©JÌ ¢†—sž¹ÏK§£ž~™|¢†˜š©„¡Ð›sŸ/©K¥Už~™†™/˜š¡£¼¾?©K«2§s­ªž˜š™ ¢|©QCvçUé «£žl¢|žlŸZ¾Í˜š¡£ž¢†—sž‘ÏK§£ž~™|¢†˜ª©„¡)¢ò½K›3žJ¨£ç„éO«£žl¢|žlŸ|¦ ¾Í˜¯¡£ža¢†—sž?žUÑK›3ž~¥U¢|ž~«ÊJ¡s™ ° žlŸ¢ò½K›3žJ¨'ç"„é4®2§s˜¯­š«wJ¡ J¡s™ ° žlŸ4Ìd©F¥l§s™l¨2J¡2«DçHÖ+év¢|Ÿ/J¡2™|Ìd©JŸZ¾÷¢†—£ž)ÏK§£ž~™|¢†˜ª©„¡ ˜š¡+¢|©ÍÏK§£žlŸ/˜ªž~™ÈÌH©JŸ¢†—£ža™|ž~gŸ/¥/—wž~¡£¼„˜š¡£žJÆ èÉ¡¬©JŸ/«£žlŸv¢|©ÇP¡s«&¢†—£ž¹Ÿ/˜ª¼„—+¢ÈJ¡s™ ° žlŸv¢|©ÍSÏK§£ž~™É¦ ¢†˜ª©„¡!ÌdŸ†©„¾ R­šgŸ†¼JžÍ¥U©„­š­šž~¥U¢†˜ª©„¡!©J̑¢|žUÑK¢†™~¨OÇ2ŸZ™|¢ ° ž —s¿JžÐ¢|©bæK¡£© ° ° —2g¢ ° ž&™†—£©„§s­š«Ê­š©+©JæÌH©JŸ~Æ&–—sž J¡s™ ° žlŸ¢ ½+›3žÍ¥lJ¡!§s™/§sJ­š­ª½Ë®3ž?«£žl¢|žlŸ/¾Í˜¯¡£ž~«iÌdŸ†©„¾ ¢†—£žÈÏK§£ž~™|¢†˜ª©„¡ Æ ã ©JŸ-®3žl¢|¢|žlŸ«£žl¢|ž~¥U¢†˜ª©„¡©JÌ2¢†—sž‘J¡F¦ ™ ° žlŸ~¨K¢†—£žÏK§£ž~™|¢†˜ª©„¡2™<gŸ†žÇ2Ÿ/™|¢<¥l­¯J™†™†˜åÇ2ž~«Í®+½¢†—£ž~˜ªŸ ¢ ½+›3žCYRSKö„ðì¨mRS_¨mRSg¨nR¨mRSFñD|ñaÏK§£ž~™|¢†˜ª©„¡s™~¨ žl¢†¥gÆ)º ÌH§£Ÿ/¢†—£žlŸ-¥l­¯J™†™†˜åÇP¥lg¢†˜š©„¡bÌH©„­š­ª© ° ™4¢|©¬®3žl¢|¢|žlŸ ˜š«sž~¡Î¢†˜ªÌH½;¢†—£žÏK§£ž~™|¢†˜š©„¡¢ò½K› žJÆ–g®2­ªžo™†—s© ° ™M¢†—£ž ¥l­šJ™/™†˜åÇP¥lg¢†˜ª©„¡‹ÌH©JŸ¢†—sž`Ж×4ؑÙ'¦ ÚÏK§£ž~™|¢†˜ª©„¡2™lÆ cʞ»Ÿ†ž~J­š˜ªülž~«ë¢†—sg¢ï¢†—sžíÏ+§£ž~™†¢†˜ª©„¡ë¢ò½K›3ž ° J™ ¡£©J¢Ê™†§Qp&¥l˜ªž~¡+¢wÌH©JŸÇP¡s«s˜š¡s¼ J¡s™ ° žlŸ/™lÆ ã ©JŸÊ¢†—£ž ÏK§£ž~™|¢†˜ª©„¡2™­š˜ªæJžrqsR<ö)ðFñnt+K^ðdN-óÐñ  Zö>i òôFö2 ZñQu†¨¢†—£žÍJ¡s™ ° žlŸ¢ò½K›3žÍ˜š™-©J®+¿F˜ª©„§s™DCFv.b_Oµ¶)]Æ 4© ° žl¿JžlŸ~¨s¢†—s˜¯™'«£©Kž~™v¡£©J¢Èg›s›2­ª½J¨KÌH©JŸ‘žUÑ£J¾?›2­šžJ¨+¢|© w)xzy|{ }0~ €Uxƒ‚…„† ‡ˆD‰KŠn†Œ‹ŽK‘’U‚,“Ž ~ ”DŽD•U’ {—– •U‹‰K {—– ,’˜•U‹‰ wAxzy{ ‰K~ €)y ™š›œ žŸ  Ÿ ¡ ›z¢ £ ¤.™šK›¥œ Ÿz¦ §Ÿ ¨Œ©KªM«,¬K­ªM®M¨°¯U«¥±M­ ²³´zµ9¶´¥·|µ¸³¹—ºd»¥¼¹,µ½´¥¾¿ÀJ´¥Á ÂD¹0» à ăÅzÆÇœ…›zÈÊÉ0Ë¥›zÌ ÍÇ Î «¥Ï,Ð ªMÐ ÑUÐ ©Kª­¥ÑUÐ ÑUÒ«­ µÓ³¹—Ô—»JÕ¹Á°Ö¹´J×¹—Ö¾؍ٝ¹|ظ¼fÚUÛJÜJÛDÝ ªMªMޝ­®Mª Î «¥Ï,Ð ªM« Î ™š›œ ßà™šÅ á á Þ¥«¥±Jâi©ª­ ²³´zµQ×»¥·µ¸ÂºS¹ƒãJ¹·iØÓ䝼 
¹¾AãJ¹×ؽãJ¹ã ¤Åz¢Êœ…ÍăÇåÇ¢ £ æzÆÇÈ ©±UçMèªÐ éèÑUÐ ©ª µÓ³´zµêSؽ׳´J¹Áë´J×읷,»¥¼m·i³»¥ÂKÁÓãd»¥¼Á ¿ ¶¹´¥¾0»¥¼ ¹äzÁÓ»¥ÀM¹MÝ ™š›œ ßà™šÇÆ § í Î èÑU« î ¼m¶³´zµ9¿¹´¥¾0ã¥ØÊãAî ¾¹ÁÓ´¥¼ ãd¹Á¸¹×µ ÉUÇ›È Øàµ¸·9ï¾·µ¶»¥ºS´¥¼+ðK¾¹·iؽãJ¹¼ µÝ ™š›œ ßà™šÇÈ…Ç ñ,Ÿ ñ¥í Ò©òzèÑUÐ ©ª ²³´zµ9ØÓ·0µÓ³¹)×´ðKØóµ½´¥Á9» Ãdô ¾ÂMäzÂD´¥¿Ý ¤›zõ£ œ…›zÌ ™šÅ ŸUá §zá Þ¥«¥±Jâi©ª­ ²³»)ØÓ·ƒµ¸³¹)´¥Âµ¸³»¥¾A» ×µ¸³¹)Õ»¥»¥ì ›zÍMœ…šKÅÈ ©±UçMèªÐ éèÑUÐ ©ª ö÷Œ³¹î ¾»¥¼Tø ´J㥿Mùûú ü9ؽ»iäz¾´ðK³J¿ » ×ên´¥¾Êä´¥¾¹µ÷ ³´zµÊ׳¹¾MýMÝ šKÅ™ §Jñ íJñ ¡ ›z¢ £ ¤šÅ,™ ñ ¦ ¨ 說M«¥± þ—»¥¶Oã¥Ø½ã)ÿ »¥×¾´zµ½¹·0ã¥Ø½¹Ý MÅJ¤ȅ›¥œ…Ç¢ šÅ™ßàă›zÆUÉ ñ ñ¥§ ªM®M¨°¯z«± þ—»¥¶FºS´¥¼¿ƒð¹»ðKÁ¸¹0ã¥Ø½¹ã0¶³¹¼ õ ÇŝõKÌ Ç µÓ³¹·µÊ»¥¼ؽ´A·¥´¥¼KìØÓ¼6ÚUÛJÛ°Ý šÅ™ßàÌ ÅÆKæ í í ÑzÐ ¨°«­ Î Ð âiÑèªMòU« þ—»¥¶FÁÓ»¥¼Kä+ãJ»¹·Øàµûµ½´¥ìM¹|µ½»µ¸¾´¥ÀM¹Á  ž»¥º ÷»¥ì¥¿»µÊ»)ÔQØÓ؍äM´zµÊ´ Ý šÅ™ßàėÍ¤š § í ¨Œ©KªM«,¬K­Þ¥±JÐ òU« þ—»¥¶Fº0ÂD׳nã¥Ø½ãên¹¾×¾¿A·Êð¹¼ ã SÇÈ ¤ÍKÈÊÉ »¥¼f´Jã¥ÀM¹¾,µ¸ØÓ·iظ¼äSØÓ¼6ÚUÛJÛ DÝ šÅ™ßàėͤšMß ñ ¦ ®Mª Î «Ï,Ð ª« Î þ—»¥¶Fº0ÂD׳S·µ¸¾»¥¼KäM¹¾|ØÓ·ƒµ¸³¹—¼ ¹¶FÀ¥ØóµÓ¾¹»¥ÂK· ÆÇ™gËM£ œ…È ÇÅzÍ¢ ăÅJå£ ÇÈ ×´¥¾¥Õ»¥¼nºS´zµ½¹¾ؽ´¥Á ØÓ¼ÀJ¹¼ µÊ¹ãdÕ¿dµÓ³¹d÷D»¥ì¿M» ¤›zÈ ¡ ŝƏă›¥œiÇÈ £ ›Ì î ¼·µ¸Øàµ¸ÂµÊ¹A» Ï÷¹×³J¼ »¥Á¸»i䝿d×»¥º|ðK´¥¾¹ヶØàµÓ³ µÓ³¹—ºS´zµ½¹¾ؽ´¥ÁŒºd´JãJ¹9ž»¥º>×¹ÁÓÁ ÂKÁÓ»¥·¥¹MÝ šÅ™ß¸›zÈ ñ ñ Î Ð âiÑèªMòU« þ—»¥¶Iô¥¾|ØÓ·D´¥¾»¥·ÁÓ´¥ÀÁž»¥º#ên»¥·,×»¥¶Ý ›È…ŝ¢ Ì ›ËMÌ šÅ™߸œ…›zÌ Ì § § ªM®M¨°¯z«± þ—»¥¶`µ½´¥Á¸ÁŒØÓ·0ê+µÀJ¹¾¹·µÝ œËzÇȅǢʜ šÅ™ßàÈ £ ¤š ñ ¦ ®Mª Î «Ï,Ð ª« Î þ—»¥¶F¾ؽ׳dØÓ·0ü9ØÓÁÓÁ´zµÊ¹·MÝ 2£ Ì ÌQ›œ…Ç¢ šÅ™ßàÌ ›È…æÇ ñ ¦ ªM®M¨°¯z«± þ—»¥¶FÁÓ´¥¾ äM¹ØÓ·ƒµ¸³¹úû¾×µ¸Ø½×—¾¹àÅÂäM¹—µÊ» ðK¾¹·¥¹¾ÀM¹Â¼Ø ¹—¶ØÓÁ¸ãUÁ Ø Ã¹´¥¼ ã)¶ØÓÁÓãJ¹¾¼¹·i· !ûÈ ¤œ…£ ¤.È ÇàÍæzÇ ÀJ´¥Á ÂD¹0»¥¼núûÁÓ´¥·ìJ´#" ·|¼ »¥¾,µÓ³m×»¥´¥·µÝ ™šÇÈ Ç ízí ñ¥ž Ò©òzèÑUÐ ©ª ²³¹¾¹ØÓ·d÷´$|ên´¥³´¥Á Ý % ›'&(S›zš›zÌ ™šÇÆ ñ) ñ¥§ Î èÑU« ²³¹¼fã¥Ø½ãµ¸³¹0ëU¾´¥··iØÊ׃Ö¹¾ØÊ»¥ãS¹¼ ã Ý *UÍÈ ›z¢ ¢ £ ¤,+ŒÇÈ £ ÅJå ™š£ ¤iš ñ,¦  ™š£ ¤šMßà™šÅ ñ ñ Þ¥«¥±Jâi©ª ²³Jؽ׳|û¥¾ºS¹¾.-.Á Â-.Á Â0/1-QÁ¸´¥¼ 2.Ì Í32.Ì Í5462Ì ›zÆ ºS¹ºdÕ¹¾.¶»¥¼W´¥¼f¹ÁÓ¹×,µ½¹㏻87 ×¹ ăÇÄ ¡ ÇÈ ØÓ¼TµÓ³¹nô9 ÿ# Ý ™š£ ¤šMßà™šÇÈ…Ç Ÿ § Ò©òzèÑUÐ ©ª 
²³Jؽ׳m×Øàµ¸¿³´¥·|µ¸³¹)»¥Á¸ãJ¹·µ2¾¹Á¸´zµÓؽ»¥¼·³JØ ð ¤£ œ½É ´¥··ØÓ·µ½¹¾;:…×Øàµ¸¿S¶ØàµÓ³8ø »¥·úû¼ä¹Á¸¹·MÝ ™š£ ¤šMßà™šÇÆ ñ ñ Î èÑU« î ¼m¶³Jؽ׳+¿¹´¥¾¶´¥·|ԗ¹¶=<¹´¥ÁÓ´¥¼ ã ÉUÇ›È ¹>/×Á ÂDãJ¹ãž»¥º µÓ³¹—ú.Ô?<Qô ÿW´¥ÁÓÁ ؽ´¥¼ ×¹Ý ™š£ ¤šMßà™š›œ Ÿ § ªMªMޝ­ ²³Jؽ׳në´ðK´¥¼ ¹·¥¹0×´¥¾|ºS´¥ìJ¹¾³´Jã *U›õK›ÆKÇ¢ Ç ©±UçMèªÐ éèÑUÐ ©ª Øàµ¸·0Õ؍ä¥ä¹·µŒðK¹¾×¹¼Dµ½´,乃» ÷¥´¥ÁÓ¹|ØÓ¼ ¤›zÈă›A@ÇÈ µÓ³¹)ãJ»¥ºS¹·µÓØÊחºS´¥¾ìJ¹,µÝ ÆK›ÄƒÇ Ÿ Ÿ Æ›zăÇ߸™šKÅ í í Þ¥«¥±Jâi©ª­ ԗ´¥ºS¹|µ¸³¹)ãJ¹·i؍äz¼¹¾A» ×µÓ³¹—·i³»¥¶ ©±UçMèªÐ éèÑUÐ ©ª µÓ³´zµ9· ðK´¥¶¼ ¹ã)º)ØÓÁÓÁ ؽ»¥¼K·A» ÃûðKÁ¸´¥·µÓØ½× åÇ¢…£ æzÆÇÈ ØÓº0ØàµÊ´zµ¸Ø½»¥¼K· B9쥼 »¥¶¼f´¥·fö $,¹Á¸Á ؽ¹·UýMÝ Æ›zăÇ߸™šKÇÈ Ç ñ ñ Ò©òzèÑUÐ ©ª ԗ´¥ºS¹0´d×»¥Â¼Dµ¸¾¿dµ¸³´zµ2ØÓ·)ãJ¹ÀM¹Á¸»ðKØÓ¼Kä ´)ºS´,䝼 ¹µ¸Ø½×Á¸¹ÀØàµÊ´zµÓؽ»¥¼m¾´¥ØÓÁ ¶´¥¿S·i¿z·µ½¹ºmÝ ¤ÅzÍÆUœ…ÈÊÉ Æ›zăÇ߸™šK›¥œ ñ ñ ÑzÐ ÑUÒ«­ªMªMÞ Ô—´¥ºS¹0´.ïÁ º µÓ³´zµ9³´¥·—¶»¥¼ µÓ³¹C»¥Á¸ãJ¹¼Tüû¹´¥¾—ØÓ¼TµÓ³¹0üû¹¾Á ØÓ¼ Ì Ä D ØÓÁ º D ¹·µ¸ØÓÀM´¥ÁÊÝ ™šUÉ í ¦ ±J«,èâi©ª ²³J¿8ã¥Ø½ã3E´¥ÀØ½ã1-—»¥¾¹·³8´¥·ì.Ã,»¥¾)´ FQ›ËM£ å32.ÅzÈ Ç¢ š ¶»¥¾ã|ðK¾»×¹·i·¥»¥¾Ý ™šÅzÄ ñ ¦ Þ¥«¥±Jâi©ª­ ²³»¥º ã¥Ø½ãdµÓ³¹6G³Jؽ×´,ä»)ü9ÂÁ¸Á ·)Õ¹´zµ2ØÓ¼ ©±UçMèªÐ éèÑUÐ ©ª µÓ³¹)ÚUÛJÛ +׳´¥º—ðKؽ»¥¼·³JØ ðÝ H2šK£ ¤›æzÅ2ÍÌ Ì ¢ I ŽK‚‰K‹ í¥¦z¦ ñ, z§   ázáKJ  –g®2­ªž C&–<½K›3ž~™)©JÌÏK§£ž~™|¢†˜š©„¡s™aJ¡2«™|¢†g¢†˜¯™|¢†˜š¥l™lÆbèê¡¢†—s˜š™a¢†g®2­ªž ° ž‹¥U©„¡s™†˜š«sžlŸ†ž~«D¢†—2g¢SiÏ+§sž~™|¢†˜ª©„¡ ° J™ J¡s™ ° žlŸ†ž~«R¥U©JŸ/Ÿ†ž~¥U¢†­ª½Ë˜ªÌ€˜š¢†™J¡s™ ° žlŸ ° J™J¾?©„¡£¼Ð¢|©J›RÇ2¿JžŸ/J¡sæJž~«R­ª©„¡£¼ÍJ¡s™ ° žlŸ/™~Æ RSKöJð‘ÏK§£ž~™|¢†˜ª©„¡s™~¨J™RSKöJ𑘚™aJ¾)®2˜š¼„§£©„§s™)J¡2«˜ª¢ ™†½K™¹¡£©J¢†—s˜¯¡£¼Ðg®3©„§£¢¢†—sžS˜š¡sÌd©JŸ/¾Ðg¢†˜ª©„¡RJ™|æJž~«i®K½ ¢†—£žvÏ+§sž~™|¢†˜ª©„¡ÆM–—£ž‘™†J¾?žÈg›s›2­š˜šž~™ ¢|©¾ÍJ¡+½;©J¢†—£žlŸ ÏK§£ž~™|¢†˜ª©„¡B¢ò½K›3ž~™lÆ–—£ž&›2Ÿ†©J®2­ªž~¾ ° J™?™|©„­ª¿Jž~«y®K½ «£žUÇP¡2˜š¡£¼Í¥U©„¡s¥Užl›s¢-¡sJ¾?ž~« D lÿZÆ º  lÿS˜š™? ° ©JŸ/«y©JŸÐÊ™†ž~Ï+§£ž~¡2¥UžË©JÌ ° ©JŸ/«s™ ° —s˜¯¥/—ï«£žUÇP¡sžË¢†—£žRÏ+§sž~™|¢†˜ª©„¡³J¡s« «s˜š™†J¾a®2˜ª¼„§sg¢|ž ¢†—£žSÏ+§£ž~™†¢†˜ª©„¡R®K½Ë˜š¡s«s˜š¥lg¢†˜¯¡£¼ ° —sg¢¢†—sžaÏK§£ž~™|¢†˜ª©„¡ ˜š™¹­ª©K©JæK˜¯¡£¼&ÌH©JŸ~Æ ã 
©JŸ;žUÑ£J¾?›2­ªžJ¨ÌH©JŸ¢†—£žÍÏK§£ž~™|¢†˜ª©„¡ qsKö„ð8i¹ð£ñ;õªö+ñ ^ðI DìðYiMLñ ^óÍö.|u†¨2¢†—sžÌd©g¦ ¥l§s™'˜š™õªö+ñUð+ DìðgÆON;¡£© ° ˜š¡£¼¢†—£ž4Ìd©F¥l§s™<J¡2«Í¢†—£ž ÏK§£ž~™|¢†˜ª©„¡b¢ò½K›3žS˜ª¢4®3ž~¥U©„¾?ž~™-ž~J™†˜ªžlŸ¹¢|©¬«sžl¢|žlŸ/¾Í˜š¡£ž ¢†—£žË¢ ½+›3ž‹©JÌ4¢†—sžRJ¡s™ ° žlŸÍ™|©„§£¼„—+¢l¨‘¡sJ¾Íž~­ª½C¬¢†—£ž ¡sJ¾?ž©JÌM¢†—£ža­šgŸ†¼Jž~™|¢¥l˜ª¢ ½Ë˜š¡#V-žlŸ/¾ÍJ¡+½JÆ –—sž-ÌH©F¥l§s™È˜š™ÈJ­š™|©Í˜¯¾?›3©JŸ†¢†J¡Î¢È˜š¡‹«£žl¢|žlŸZ¾Í˜š¡s˜š¡s¼ ¢†—£žS­š˜š™|¢È©JÌæJžl½ ° ©JŸZ«s™ÌH©JŸ4ÏK§£žlŸ†½‹ÌH©JŸ/¾Íg¢†˜ª©„¡ Æ ú ̯¦ ¢|ž~¡¨È¾ÍJ¡+½ïÏ+§£ž~™†¢†˜ª©„¡ ° ©JŸ/«s™Í«£©!¡s©J¢Ðg›s›3ž~gŸÐ˜š¡ ¢†—£ž¬J¡s™ ° žlŸ~¨J¡2«!¢†—sg¢S˜š™;®3ž~¥lJ§s™|žÐ¢†—£ž~˜ªŸ)Ÿ/©„­ªžÐ˜š™ P §s™|¢¢|©¬Ìd©JŸZ¾¢†—£ž¥U©„¡Î¢|žUÑF¢-©JÌ¢†—£žaÏK§£ž~™|¢†˜š©„¡Æ ã ©JŸ žUÑ£J¾?›2­ªžJ¨‘˜š¡B¢†—£žRÏ+§sž~™|¢†˜ª©„¡ L RQTSSTUWV RSKöJð „ö U ðFñOR'ñZñYXs[ZAKi^ðìóÍö°I UöJõdõS+u†¨3¢†—£žSÌd©F¥l§s™ ˜š™ „ö J ð£ñ R'ñ/ñ\Xr¨ ‹¥U©„¡s¥Užl›s¢¹¢†—sg¢˜š™¹§s¡2­š˜ªæJž~­ª½ ¢|©B©K¥l¥l§£Ÿ‹˜š¡ ¢†—£žbJ¡s™ ° žlŸ~Æîèꡜ™†§s¥Z—œ™†˜š¢†§sg¢†˜ª©„¡s™l¨ ¢†—£ž-Ìd©F¥l§s™‘™†—£©„§s­¯«Ð¡£©J¢‘®3ž4˜š¡s¥l­¯§s«£ž~«Í˜š¡&¢†—£ž¹­š˜š™|¢<©JÌ æJžl½ ° ©JŸ/«2™¥U©„¡s™†˜š«£žlŸ/ž~«)ÌH©JŸ«£žl¢|ž~¥U¢†˜š¡£¼¹¢†—£žÈJ¡s™ ° žlŸ~Æ –—sž<›sŸ†©F¥Už~™†™€©JÌ£žUÑK¢|ŸZJ¥U¢†˜š¡£¼-æJžl½ ° ©JŸ/«s™M˜š™O®2J™|ž~« ©„¡ïÊ™|žl¢?©JÌ4©JŸ/«£žlŸ†ž~«ï—£ž~§sŸ/˜š™|¢†˜š¥l™~Æ!Ø<J¥/— —£ž~§£Ÿ/˜š™|¦ ¢†˜š¥?Ÿ/žl¢†§£Ÿ/¡s™b™|žl¢;©JÌvæJžl½ ° ©JŸ/«s™)¢†—sg¢agŸ/žÐJ«s«£ž~« ˜š¡Ê¢†—sž&™†J¾?ž&©JŸ/«£žlŸa¢|©b¢†—£ž¬Ï+§sž~™|¢†˜ª©„¡ÊæJžl½ ° ©JŸZ«s™lÆ cʞ˗s~¿Jž‹˜š¾?›P­ªž~¾?ž~¡+¢|ž~«!ž~˜ª¼„—+¢S«2˜åä žlŸ/ž~¡Î¢S—£ž~§£Ÿ/˜š™|¦ ¢†˜š¥l™lÆÈèÉ¡s˜ª¢†˜¯J­š­ª½J¨£©„¡s­ª½‹¢†—£ž)æJžl½ ° ©JŸ/«s™-Ÿ†žl¢†§£ŸZ¡£ž~«Ë®K½ ¢†—£žSÇ2Ÿ/™†¢¹™/˜åÑb—£ž~§£ŸZ˜š™|¢†˜š¥l™-gŸ†žÍ¥U©„¡s™†˜¯«£žlŸ†ž~«Æ;èêÌ<Ìì§£Ÿ|¦ ¢†—£žlŸ€æJžl½ ° ©JŸ/«s™gŸ†ž<¡£žlž~«sž~«)˜š¡¢†—sžŸ†žl¢|Ÿ/˜ªžl¿gJ­Î­š©+©J› ¨ æJžl½ ° ©JŸ/«2™€›2Ÿ†©¿F˜š«£ž~«a®+½)¢†—£ž'©J¢†—sžlŸ¢ ° ©—£ž~§sŸ/˜š™|¢†˜š¥l™ gŸ†ž;J«s«sž~«Ædc —sž~¡ËæJžl½ ° ©JŸ/«s™«£žUÇP¡£žJ¡ËžUÑ£¥Užlž~«F¦ ˜š¡£¼„­š½b™|›3ž~¥l˜åÇP¥ÏK§£žlŸ†½J¨ ¢†—sžl½igŸ†ž&«£Ÿ†©J›s›3ž~«˜š¡¢†—£ž Ÿ†žl¿JžlŸ/™†ž~« ©JŸ/«sžlŸ¬˜š¡ ° 
—s˜š¥Z—³¢†—£žl½³—2~¿Jži® žlž~¡œž~¡F¦ ¢|žlŸ†ž~«Æ'–—£ž;—£ž~§£ŸZ˜š™|¢†˜š¥l™vgŸ†žC ]_^ ñ R8Jh `SñUÿû…Uð _QCdc—£ž~¡£žl¿JžlŸÈÏK§£©J¢|ž~«ÐžUÑK¦ ›sŸ†ž~™/™†˜ª©„¡s™gŸ†ž'Ÿ†ž~¥U©J¼„¡s˜ªülž~«a˜š¡;4Ï+§£ž~™†¢†˜ª©„¡¨rJ­¯­„¡£©„¡F¦ ™|¢|©J› ° ©JŸ/«2™ ©JÌ£¢†—£ž<ÏK§£©J¢†g¢†˜ª©„¡a®3ž~¥lJ¾?žæJžl½ ° ©JŸZ«s™lÆ ]a^ ñ R8Jh;`añlÿi^ð [b2Cïº-­š­-¡sJ¾?ž~«ž~¡+¢†˜ª¢†˜ªž~™l¨ Ÿ†ž~¥U©J¼„¡s˜šülž~«±J™‹›sŸ†©J›3žlŸ¬¡s©„§s¡s™l¨gŸ/ž!™|ž~­ªž~¥U¢|ž~«±J™ æJžl½ ° ©JŸ/«2™lÆ ]=^ ñ R8Jh;`añlÿûKi^ð dcCMº4­š­£¥U©„¾Í›2­ªžUÑS¡s©„¾Í˜š¡sJ­š™ J¡s«¢†—£ž~˜šŸ¹J« P ž~¥U¢†˜ª¿gJ­¾?©F«s˜åÇ2žlŸZ™4gŸ†žÍ™†ž~­ªž~¥U¢|ž~«!J™ æJžl½ ° ©JŸ/«2™lÆ ]e^ ñD°R8Uh;`SñUÿûKi^ð afQC¸º4­š­©J¢†—sžlŸ¥U©„¾?›2­šžUÑ ¡£©„¾Ð˜š¡sJ­š™vgŸ†ž)™†ž~­ªž~¥U¢|ž~«wJ™æJžl½ ° ©JŸ/«s™lÆ ]g^ ñ R8Jh;`añlÿûKi^ð ihC¬º-­š­v¡£©„§s¡s™SJ¡s«B¢†—£ž~˜ªŸ J« P ž~¥U¢†˜ª¿gJ­ ¾?©F«s˜åÇ2žlŸZ™‘gŸ/ž;™|ž~­ªž~¥U¢|ž~«RJ™æJžl½ ° ©JŸ/«s™lÆ ]g^ ñ R8Jh;`añlÿûKi^ð kjC¬º-­š­‘¢†—£ž¬©J¢†—£žlŸÍ¡£©„§s¡s™ Ÿ†ž~¥U©J¼„¡2˜ªülž~«Í˜š¡a¢†—£žÏK§£ž~™|¢†˜ª©„¡gŸ†žÈ™|ž~­ªž~¥U¢|ž~«ÐJ™MæJžl½Î¦ ° ©JŸ/«s™lÆ ]l^ ñDR8Uh `SñUÿû…Uð nmDCœº4­š­¹¿JžlŸ/®2™RÌHŸ†©„¾ ¢†—£ž ÏK§£ž~™|¢†˜ª©„¡wgŸ†ž)™|ž~­ªž~¥U¢|ž~«wJ™æJžl½ ° ©JŸ/«2™lÆ ]o^ ñ R8Jh;`añlÿûKi^ð _pC<–—£ž¹ÏK§£ž~™|¢†˜ª©„¡¬ÌH©K¥l§s™v˜š™ J«s«sž~«Ë¢|©Í¢†—£žæJžl½ ° ©JŸZ«s™;Æ –g®2­ªžsR­š˜š™†¢†™;¢ ° ©Ï+§£ž~™†¢†˜ª©„¡s™;ÌHŸ†©„¾Ò¢†—£ž&–×ّؑ¦ ÚË¥U©„¾Í› žl¢†˜š¢†˜ª©„¡Ê¢|©J¼Jžl¢†—£žlŸ ° ˜ª¢†—Ê¢†—sž~˜ªŸ)J™†™|©F¥l˜šg¢|ž~« æJžl½ ° ©JŸZ«s™lÆ–—£žÈ–g®2­ªžÈJ­š™|©˜š­š­¯§s™|¢|Ÿ/g¢|ž~™¢†—£žv¢|Ÿ/J¥Už ©JÌ4æJžl½ ° ©JŸ/«s™§s¡+¢†˜š­'¢†—£ž&›PgŸ/g¼JŸ/g›2—s™¥U©„¡+¢†J˜š¡s˜š¡s¼ ¢†—£žÍJ¡2™ ° žlŸ ° žlŸ†žÌH©„§s¡s«Æ ã ©JŸ)Ï+§sž~™|¢†˜ª©„¡ JÕK¨ ¢†—£ž ›2gŸZg¼JŸ/g›2—s™-¥U©„¡Î¢†J˜¯¡s˜š¡£¼&¢†—£žSJ¡2™ ° žlŸZ™4¥U©„§s­¯«b¡£©J¢ ®3ž)ÌH©„§s¡s«w®3žlÌd©JŸ†ž«£Ÿ†©J›2›2˜š¡£¼Í¾ÍJ¡+½Ë©JÌ¢†—£ž˜š¡s˜š¢†˜šJ­ æJžl½ ° ©JŸZ«s™lÆ!èÉ¡y¥U©„¡Î¢|Ÿ/J™†¢l¨<¢†—£žËJ¡s™ ° žlŸ?ÌH©JŸÏK§£ž~™É¦ ¢†˜ª©„¡ " ° J™RÌH©„§s¡s« ° —£ž~¡±¢†—£ž!¿JžlŸ/®<†ñ 2ð ° J™ J«s«sž~«Ë¢|©Í¢†—£ž \‘©K©„­ªž~J¡wÏK§£žlŸ†½JÆ q ßóízž ²³´zµ9ØÓ·0µÓ³¹—¼´¥ºS¹A» ×µ¸³¹fö ùºS´¥ÁÓ¹Mý ×»¥Â¼Dµ½¹¾ ð´¥¾,µ9µÊ»1ÁŒÔQظ¼»AB2¶³Jؽ׳d¾¹·iÂKÁ 
µ¸·—ØÓ¼ ×»¥»¥Á ØÓ¼äSµ½¹º—ðK¹¾´zµÓÂK¾¹·—´¥¼ ã ÀJ¹¾¿+㥾¿¶¹´zµ¸³¹¾+Ý 2.ÇÉM¢ ¸Çă›zÌ Ç?Ìr£ ÆKŃåÈÊÉA™2Ç›¥œ…šKÇÈ9¤ÅJÅzÌ £ ÆKæ|œ…Çăõ ÇÈ ›œ…ÍÈ Ç¢ ¸Çă›zÌ Ç?Ìr£ ÆKŃåÈÊÉA™2Ç›¥œ…šKÇÈ9¤ÅJÅzÌ £ ÆKæ ¸Çă›zÌ Ç?Ìr£ ÆKŃåÈÊÉA™2Ç›¥œ…šKÇÈ ¸Çă›zÌ Ç?Ìr£ ÆKŃåÈÊÉ ¸Çă›zÌ Ç?Ìr£ ÆKÅ ¸Çă›zÌ Ç?Ì q ßÊñ¥§ þ—»¥¶Fº0ÂD׳n×»¥ÂKÁÓã0¿»¥Â8¾¹¼ µQ´tsD»¥Á 읷i¶´,ä¹¼ ÕÂä.û¥¾|ØÓ¼WÚUÛ5u uIÝ 2.ÇÉM¢ vÅÌ @J¢Ê™2›zæÇÆ ¡ Íæ sD»¥Á 읷i¶´,ä¹¼6ÕÂä)¾¹¼Dµ –g®2­ªž6CØÑ£J¾?›2­šž~™M©JÌ3–×ّؑ¦ Ú;·¹§£ž~™|¢†˜ª©„¡wNžl½Î¦ ° ©JŸ/«s™ x k” “Δ ÞO“„”zyn1|{_á<â (~}d* áÞ €‚Wƒ…„R€(†C‡,ˆ†C€ –—sžaèÉ¡£ÌH©JŸ/¾Íg¢†˜ª©„¡iמl¢|Ÿ/˜šžl¿_J­Ø<¡£¼„˜¯¡£ž)ÌH©JŸ)²´€µrµ¶ ˜š™MŸ/ž~­šg¢|ž~«¢|©;¢†—£žWZ ›2Ÿ/˜š™|ž'èêל™|ž~gŸ/¥/—Íž~¡£¼„˜š¡£žÈ~¿gJ˜š­å¦ g®2­šž?ÌdŸ†©„¾ Àè†ÃF–Æ–—£žlŸ/ž ° žlŸ†žÍ™†žl¿JžlŸ/J­ÌHž~g¢†§£Ÿ†ž~™ ©JÌv¢†—£ž'Z ›sŸ/˜š™†žSèÉ×,ž~¡£¼„˜š¡£ž ° —s˜š¥/— ° žlŸ†ž&¡£©J¢a¥U©„¡F¦ «s§2¥l˜ª¿JžÈ¢|© ° ©JŸ†æK˜¯¡£¼ ° ˜ª¢†—s˜š¡S¢†—£ž«sž~™†˜ª¼„¡©J̉δ€µrµ¶Æ \‘ž~¥lJ§s™|ž&©JÌv¢†—s˜š™l¨Ob¡£ž ° èÉ× ž~¡£¼„˜¯¡£ž ° J™a¼Jž~¡£žlŸ|¦ g¢|ž~« ¢|©™/§£›s›3©JŸ†¢¬²´€µµr¶ ° ˜š¢†—£©„§£¢Í¢†—£žbž~¡s¥l§s¾¦ ®sŸZJ¡s¥Už?©JÌ<¢†—£ž~™†žÍÌdž~g¢†§sŸ†ž~™lÆ–—£žÍ˜š¡s«sžUÑb¥UŸ†ž~g¢†˜ª©„¡ ° J™l¨2—£© ° žl¿JžlŸ¨2æJžl›s¢˜¯¡Ë˜ª¢†™Èž~¡+¢†˜ªŸ†žl¢ò½JÆ –—£žOZ ›sŸZ˜š™|ž¹èê×»ž~¡£¼„˜š¡£ž ° J™®2§s˜š­ª¢È§s™/˜š¡£¼ÍÍ¥U©g¦ ™†˜¯¡£ž!¿Jž~¥U¢|©JŸ!™|›2J¥Užy¾?©F«£ž~­kÆ–—2˜š™R¾?©F«£ž~­S«£©+ž~™ ¡£©J¢bJ­š­ª© ° Ìd©JŸR¢†—sž!žUÑK¢|ŸZJ¥U¢†˜ª©„¡í©JÌ¢†—£©„™|ž«£©K¥l§£¦ ¾?ž~¡+¢†™ ° —s˜š¥Z—w˜š¡s¥l­¯§s«£žJ­š­ ©JÌM¢†—£žaæJžl½ ° ©JŸ/«s™~¨P®2§s¢ žUÑF¢|Ÿ/J¥U¢†™«£©F¥l§s¾?ž~¡+¢†™-J¥l¥U©JŸZ«s˜š¡£¼¬¢|©‹¢†—£žS™/˜š¾Í˜š­šgŸ†¦ ˜ª¢ ½³¾?ž~J™†§£Ÿ†žb®3žl¢ ° žlž~¡œ¢†—sžb«£©F¥l§s¾?ž~¡+¢&J¡s«œ¢†—£ž ÏK§£žlŸ†½ÐJ™¥U©„¾Í›2§£¢|ž~«‹®K½Ð¢†—£ž¥U©„™†˜š¡sž-©J̀¢†—£žJ¡£¼„­ªž ®3žl¢ ° žlž~¡‹¢†—£ž¹¿Jž~¥U¢|©JŸ/™ÈŸ†žl›2Ÿ†ž~™|ž~¡+¢|ž~«&®K½Ð¢†—sž-«£©F¥l§F¦ ¾?ž~¡+¢‘J¡2«Í¢†—£ž¹Ï+§sžlŸ†½JÆ–—s˜š™<› žlŸZ¾Í˜ª¢†™«£©F¥l§s¾?ž~¡+¢†™ ¢|©®3ž'Ÿ/žl¢|Ÿ/˜ªžl¿Jž~« ° —£ž~¡S©„¡s­š½;©„¡£žv©JÌP¢†—sž‘æJžl½ ° ©JŸ/«s™ ˜š™€›sŸ†ž~™|ž~¡+¢lÆMº4«s«2˜ª¢†˜ª©„¡sJ­š­š½J¨~¢†—£ž‘æJžl½ ° ©JŸ/«s™›sŸ†ž~™†ž~¡Î¢ ˜š¡a©„¡£žvŸ†žl¢|Ÿ/˜ªžl¿Jž~««£©F¥l§s¾?ž~¡+¢M¾Í~½¡£©J¢® ž‘›sŸ†ž~™†ž~¡Î¢ 
˜š¡ËJ¡£©J¢†—sžlŸŸ†žl¢|Ÿ/˜šžl¿Jž~«R«£©F¥l§s¾?ž~¡+¢lÆ ²´€µµr¶‹Š ™Ÿ/ž~Ï+§s˜šŸ†ž~¾?ž~¡+¢†™gŸ†ž4¾S§s¥Z—о?©JŸ†ž4Ÿ/˜ª¼„˜š« Æ ²´µµ¶ Ÿ†ž~ÏK§s˜ªŸ/ž~™;¢†—sg¢«£©F¥l§s¾?ž~¡+¢†™)®3žÐŸ/žl¢|Ÿ/˜ªžl¿Jž~« ©„¡s­ª½ ° —£ž~¡¤J­š­S©J̬¢†—£žïæJžl½ ° ©JŸ/«2™!gŸ†žï›sŸ†ž~™†ž~¡Î¢ ˜š¡%¢†—£ž³«s©K¥l§s¾Íž~¡Î¢lÆ6–—K§s™l¨Í˜š¢Ê® ž~¥lJ¾Íž³¡£ž~¥Už~™É¦ ™†gŸ†½±¢|©˜š¾?›P­ªž~¾?ž~¡+¢wœ¾Í©JŸ†ž›2Ÿ†ž~¥l˜š™|žB«£žl¢|žlŸ/¾Í˜å¦ ¡sJ¡+¢‹Ìd©JŸ‹žUÑF¢|Ÿ/J¥U¢†˜ª©„¡Æ ã ©JŸË¢†—£žž~gŸ/­ª½ ° ©JŸ†æ ¨-˜ª¢ ° J™a«£žl¢|žlŸZ¾Í˜š¡£ž~«D¢†—sg¢S \‘©+©„­ªž~J¡y«s˜¯™†¥UŸ/˜š¾Í˜¯¡sg¢|ž ° ©„§s­š«S™†§ûp&¥Už'›sŸ†©¿F˜š«£ž~«a¢†—sg¢¢†—£ž‘©J› žlŸZg¢|©JŸ/™M´0]0^ J¡s«w¶)_ ° žlŸ†ž;˜¯¾?›2­ªž~¾?ž~¡+¢|ž~«ÆMèê¢ ° J™J­¯™|©?¡£ž~¥Už~™É¦ ™†gŸ†½?¢|©a›sŸ†©r¿K˜š«sž¢†—£ž-g®2˜š­š˜ª¢ ½a¢|©S©JŸ†¼„J¡s˜šülž4ÏK§£žlŸ/˜ªž~™ ¢†—£Ÿ†©„§s¼„—Ë¢†—£ž;§s™|ž©JÌM›2gŸ/ž~¡Î¢†—£ž~™†ž~™lÆ cʞ&©J›s¢|ž~«DÌH©JŸ)¢†—sž \‘©+©„­šž~J¡D˜š¡s«sžUÑF˜š¡s¼‹J™a©J›£¦ ›3©„™|ž~«&¢|©S¿Jž~¥U¢|©JŸ4˜š¡s«£žUÑ£˜š¡£¼&ç\v§s¥/æK­šžl½Ížl¢ÈJ­kÆ !!JÚ„é ®3ž~¥lJ§s™|ž \‘©K©„­ªž~J¡R˜š¡s«sžUÑF˜š¡s¼a˜š¡s¥UŸ/ž~J™|ž~™È¢†—£ž †ñM Zögõdõ g¢¹¢†—£žSžUÑK›3ž~¡s™|žS©JÌô—|ñ DiKsÆ)–—2g¢ ° ©JŸ†æF™ ° ž~­š­ ÌH©JŸ-§s™4™†˜š¡s¥Už ° žS¥U©„¡Î¢|Ÿ†©„­¢†—£žaŸ†žl¢|Ÿ/˜ªžl¿gJ­ ›2Ÿ†ž~¥l˜š™†˜ª©„¡ ° ˜ª¢†—)¢†—£žnvJ´)_´Œ)_´0v~©J›3žlŸ/g¢|©JŸ ° —2˜š¥/—›2Ÿ†©¿F˜š«£ž~™ «£©F¥l§s¾?ž~¡+¢-ÇP­ª¢|žlŸ/˜š¡s¼£Æ-èÉ¡J«s«s˜ª¢†˜š©„¡¨P¢†—sž \‘©K©„­ªž~J¡ ˜š¡s«sžUÑF˜š¡s¼³Ÿ†ž~ÏK§s˜ªŸ†ž~™b­ªž~™†™i›sŸ†©F¥Už~™†™†˜š¡s¼ ¢†˜š¾?ž¢†—sJ¡ ¿Jž~¥U¢|©JŸ;˜š¡s«sžUÑF˜š¡s¼£¨2J¡s«b¢†—s˜š™4® ž~¥U©„¾Íž~™-˜¯¾?›3©JŸ†¢†J¡Î¢ ° —£ž~¡‹¢†—sž)¥U©„­š­ªž~¥U¢†˜ª©„¡w™†˜ªülž;˜š¡s¥UŸ/ž~J™|ž~™lÆ –€©ÐÌìJ¥l˜š­š˜ª¢†g¢|ž)¢†—£ž)˜š«£ž~¡+¢†˜åÇP¥lg¢†˜š©„¡Ë©JÌ¢†—£ž)«£©F¥l§F¦ ¾?ž~¡+¢™|©„§£ŸZ¥Už~™l¨2¢†—£ž)ž~¡£¼„˜š¡£ž ° J™Ÿ†ž~ÏK§s˜ªŸ/ž~«‹¢|©Ð›2§£¢ ¢†—£ž‹«£©F¥l§s¾?ž~¡+¢ ‹˜š¡ÌdŸ†©„¡+¢S©JÌž~J¥/—³­¯˜š¡£žÐ˜š¡¢†—£ž «£©F¥l§s¾?ž~¡+¢lÆ –—sž ˜š¡s«£žUÑû¥UŸ†ž~g¢†˜ª©„¡ ˜¯¡s¥l­š§s«£ž~™¢†—sž Ìd©„­¯­ª© ° ¦ ˜š¡£¼y™|¢|žl›2™CB¡£©JŸZ¾ÍJ­š˜ªülži¢†—£žDÃ.V¹ÄiÁ¤¢†g¼„™l¨-ž~­š˜š¾¦ ˜š¡sg¢|ž žUÑF¢|Ÿ/J¡£žl©„§2™ ¥/—2gŸ/J¥U¢|žlŸ/™l¨û˜š«£ž~¡+¢†˜ªÌH½ ¢†—£ž ° ©JŸ/«s™ ° 
˜š¢†—s˜š¡¬ž~J¥/—i«£©F¥l§s¾?ž~¡+¢l¨2™|¢|ž~¾÷¢†—£ž)¢|žlŸ/¾Í™ ç ° ©JŸ/«s™Zé §s™†˜š¡£¼¢†—£ž'©JŸ†¢|žlŸ™|¢|ž~¾Í¾Ð˜š¡£¼4J­š¼J©JŸ/˜ª¢†—s¾Ë¨ ¥lJ­š¥l§s­¯g¢|žî¢†—£ž%­ª©F¥lJ­Bçì«s©K¥l§s¾Íž~¡Î¢Z霝J¡s« ¼„­ª©J®2J­ çì¥U©„­š­ªž~¥U¢†˜š©„¡Pé ° ž~˜š¼„—΢†™l¨Ð®2§s˜š­š«îí¥U©„¾?›sŸ/ž~—£ž~¡s™†˜ª¿Jž «s˜š¥U¢†˜š©„¡sgŸ†½)©JÌ ¢†—£ž¥U©„­š­šž~¥U¢†˜ª©„¡¨„J¡s«Í¥UŸ/ž~g¢|ž¢†—£ž˜š¡F¦ ¿JžlŸ†¢|ž~«b˜š¡s«£žUÑ&ÇP­ªžJÆ Ž ~‚W‡‚~1„’‘‹“•”W€‚…ˆ†6‡ –—£ži¡+§2¾)®3žlŸÍ©JÌ)«s©K¥l§s¾Íž~¡Î¢†™&¢†—sg¢&¥U©„¡+¢†J˜š¡ ¢†—£ž æJžl½ ° ©JŸ/«2™'Ÿ/žl¢†§£Ÿ/¡£ž~«Ð®+½Í¢†—sž¹ÃKž~gŸ/¥Z—‹Ø<¡£¼„˜š¡sž¾Í~½ ®3žb­šgŸ/¼Jži™†˜¯¡s¥Užw©„¡s­š½ ° ž~gæ \‘©K©„­ªž~J¡ ©J›3žlŸ/g¢|©JŸ/™ ° žlŸ†žD§s™|ž~« ƺ ¡sž ° ¨¾?©JŸ†ž!Ÿ†ž~™†¢|Ÿ/˜š¥U¢†˜ª¿JžÊ©J›3žlŸ/_¦ ¢|©JŸ ° J™¹˜š¡+¢|Ÿ†©F«s§s¥Už~«ƒCmvJ´0_´.Œ0_´0v4sÆ–—2˜š™È©J›£¦ žlŸ/g¢|©JŸ?™|ž~gŸZ¥/—£ž~™­š˜ªæJž&J¡´0])^œ©J›3žlŸ/g¢|©JŸSÌH©JŸa¢†—£ž ° ©JŸ/«s™˜š¡¢†—£žËÏ+§sžlŸ†½ ° ˜ª¢†—¢†—£žË¥U©„¡s™|¢|ŸZJ˜š¡Î¢¢†—sg¢ ¢†—£ž ° ©JŸ/«s™®3ž~­ª©„¡s¼Ð©„¡s­ª½‹¢|©‹™|©„¾?ž¬¥U©„¡2™|ž~¥l§£¢†˜ª¿Jž ›2gŸZg¼JŸ/g›2—s™l¨ ° —£žlŸ/ž`?˜š™‘a¥U©„¡+¢|Ÿ†©„­š­šg®P­ªž›3©„™†˜ª¢†˜š¿Jž ˜š¡+¢|žl¼JžlŸ~Æ –—£ž±›2gŸ/J¾?žl¢|žlŸl ™|ž~­šž~¥U¢†™ ¢†—£žî¡+§s¾a®3žlŸï©JÌ ›2gŸZg¼JŸ/g›2—s™l¨4¢†—+§2™¬¥U©„¡+¢|Ÿ†©„­š­¯˜š¡£¼B¢†—£žÊ™†˜ªülž©JÌ)¢†—£ž ¢|žUÑF¢!Ÿ†žl¢|Ÿ/˜šžl¿Jž~«¤ÌdŸ/©„¾ í«£©F¥l§s¾?ž~¡+¢¥U©„¡2™†˜š«£žlŸ†ž~« Ÿ†ž~­šžl¿_J¡+¢lÆ–—£ž!Ÿ/g¢†˜ª©„¡2J­ªžD˜š™Ë¢†—2g¢w¾?©„™|¢i­š˜ªæJž~­ª½ ¢†—£žb˜š¡£ÌH©JŸ/¾Íg¢†˜ª©„¡yŸ†ž~ÏK§£ž~™|¢|ž~«œ˜š™?ÌH©„§s¡s«ï˜š¡ï!Ìdž ° ›2gŸZg¼JŸ/g›2—s™ŸZg¢†—£žlŸ)¢†—sJ¡D®3ž~˜š¡£¼R«s˜š™†› žlŸZ™|ž~«i©¿JžlŸ J¡Ëž~¡+¢†˜ªŸ†ža«£©K¥l§2¾?ž~¡Î¢lÆ Ž ~‚~‡‚~3„—–,‚W˜1€(‚ˆ†C‡ ÂMgŸ/g¼JŸ/g›P—ï©JŸ/«£žlŸ/˜¯¡£¼i˜¯™› žlŸ/Ìd©JŸ/¾Íž~«®K½ybŸZJ«s˜åÑ ™|©JŸ/¢ ¢†—sg¢˜š¡+¿J©„­ª¿Jž~™û¢†—£Ÿ†žlž9«s˜ªä žlŸ†ž~¡+¢û™†¥U©JŸ†ž~™DC ¢†—£ž ­šgŸ/¼Jž~™|¢š™ ögóÐñ R8J ~ñ†þlÿ2ñD ZñDh¥D †ñU¨¤¢†—£ž ­šgŸ/¼Jž~™|¢œ›`…Uðùö ^ñ h |ñJ¡s«‹¢†—£ž)™/¾ÍJ­š­ªž~™|¢žr…KKh  XJñDR8Uh¥D †ñUÆ –—£ž «£žUÇP¡s˜š¢†˜ª©„¡%©JÌR¢†—£ž~™|ž ™†¥U©JŸ/ž~™R˜š™¬®2J™|ž~«©„¡±¢†—sž¡s©J¢†˜ª©„¡©JÌôFöÉö2Éö^ô9h Rd2RƋÂMgŸ/g¼JŸ/g›2—£¦ ° ˜š¡s«£© ° 
™;gŸ†ž&«£žl¢|žlŸ/¾Í˜š¡sž~« ®K½;¢†—£žv¡£žlž~«S¢|©¥U©„¡s™†˜š«£žlŸM™|žl›2gŸZg¢|ž~­ª½;ž~J¥/—?¾Ðg¢†¥/— ©JÌ'¢†—£ž?™†J¾?žæJžl½ ° ©JŸ/«!˜¯¡w¢†—£ž?™†J¾ÍžS›PgŸ/g¼JŸ/g›2—Æ ã ©JŸ<žUÑFJ¾?›P­ªžJ¨„˜ªÌ ° žÈ—s~¿Jž¹™|žl¢©JÌPæJžl½ ° ©JŸ/«s™ Ÿ9X9QV Xb9VOXc9VOXYf¢¡SJ¡s«R˜š¡Ë?›PgŸ/g¼JŸ/g›2—gX9QJ¡s«gXbagŸ†ž ¾Íg¢†¥Z—£ž~« ž~J¥/—œ¢ ° ˜¯¥UžJ¨ ° —£žlŸ†ž~J™£Xci˜š™Í¾Íg¢†¥Z—£ž~« ©„¡s­š½¬©„¡2¥UžJ¨ J¡s«[XYf¬˜š™¡s©J¢4¾Íg¢†¥Z—£ž~«¨ ° žSgŸ†ž)¼J©g¦ ˜š¡s¼ï¢|© —s~¿JžÌd©„§sŸw«s˜åä3žlŸ†ž~¡+¢ ° ˜š¡s«£© ° ™l¨¹«£žUÇP¡£ž~« ®K½a¢†—sžæJžl½ ° ©JŸ/«s™C¥¤¦X9Qhêó?ö„ð, (QV1XWb9h óÍöJð, D(QV1Xc\§ù¨ ¤¦X9Qh óÍöJð, DTb9V¨XWb2hêóÍöJð, ¢QV©Xc\§ù¨M¤¦X…QhêóÍöJð, (Q…VªXb2h óÍöJð, DTb9V Xc\§ù¨2J¡2««¤¦X…QhêóÍöJð, b9V¬Xb2h óÍöJð, DTb9VXc\§ùÆ º ° ˜š¡s«s© ° ¥U©„¾?›sŸZ˜š™|ž~™ÈJ­š­ ¢†—£ž¢|žUÑF¢®3žl¢ ° žlž~¡R¢†—£ž ­ª© ° ž~™†¢'›3©„™†˜ª¢†˜š©„¡£ž~«ÍæJžl½ ° ©JŸZ«&˜š¡&¢†—£ž ° ˜š¡s«£© ° J¡s« ¢†—£ža—s˜ª¼„—£ž~™|¢v›3©„™†˜ª¢†˜š©„¡‹æJžl½ ° ©JŸ/«w˜¯¡‹¢†—£ž ° ˜š¡s«s© ° Æ ã ©JŸ!ž~J¥/— ›2gŸ/g¼JŸ/g›2— ° ˜¯¡s«£© °n° ž³¥U©„¾?›2§£¢|ž ¢†—£ž)Ìd©„­š­š© ° ˜š¡s¼™/¥U©JŸ†ž~™DC ] ™ ögóÐñ R8J ~ñ†þlÿ2ñD ZñDh¥D †ñ C÷¥U©„¾?›2§s¢|ž~™D¢†—£ž ¡K§s¾a® žlŸa©JÌ ° ©JŸ/«2™)ÌHŸ†©„¾¢†—sž&ÏK§£ž~™|¢†˜ª©„¡D¢†—sg¢agŸ†ž Ÿ†ž~¥U©J¼„¡2˜ªülž~«ï˜š¡B¢†—£žË™†J¾?žË™|ž~ÏK§£ž~¡s¥UžR˜š¡¢†—£žË¥l§£Ÿ|¦ Ÿ†ž~¡+¢›2gŸZg¼JŸ/g›2—F¦ ° ˜¯¡s«£© ° Æ ] ›`i^ðùö— Zñ h K†ñ C Ÿ/žl›sŸ†ž~™|ž~¡+¢†™R¢†—£ž¡+§s¾a®3žlŸË©JÌ ° ©JŸ/«s™D¢†—sg¢D™|žl›2gŸZg¢|žœ¢†—£žœ¾?©„™|¢D«s˜¯™|¢†J¡Î¢DæJžl½Î¦ ° ©JŸ/«s™˜¯¡‹¢†—£ž ° ˜š¡2«£© ° Æ ] r…KK XJñDR8UKh |ñDC¥U©„¾?›P§£¢|ž~™ ¢†—£ž'¡+§s¾?¦ ®3žlŸ©J̧s¡2¾Íg¢†¥/—£ž~«wæJžl½ ° ©JŸ/«s™~Æv–—s˜š™¾?ž~J™/§£Ÿ†ž)˜š™ ˜š«sž~¡Î¢†˜š¥lJ­PÌH©JŸJ­š­ ° ˜š¡s«£© ° ™<ÌHŸ†©„¾¸¢†—£ž¹™†J¾?ž¹›2gŸ/_¦ ¼JŸ/g›P—¨O®2§s¢;¿_gŸZ˜ªž~™;ÌH©JŸ ° ˜¯¡s«£© ° ™¹ÌdŸ†©„¾Ò«s˜åä3žlŸ†ž~¡+¢ ›2gŸZg¼JŸ/g›2—s™lÆ –—sžRŸ/J«s˜åѳ™|©JŸ/¢†˜š¡£¼!¢†gæJž~™¬›2­šJ¥UžbJ¥UŸ†©„™†™‹J­š­È¢†—£ž ° ˜¯¡s«£© ° ™/¥U©JŸ†ž~™ÌH©JŸJ­š­3›2gŸZg¼JŸ/g›2—s™lÆ ­ Êá< -( “ k&“„ß• ( g * áÞ –—£žFNPQR'ñ fGIJ Zñ K¾Í©K«s§2­ªž˜š«£ž~¡+¢†˜åÇ2ž~™‘J¡s« žUÑF¢|Ÿ/J¥U¢†™‘¢†—£ž4J¡s™ ° 
žlŸ'ÌdŸ†©„¾,¢†—£ž›PgŸ/g¼JŸ/g›2—s™<¢†—sg¢ ¥U©„¡+¢†J˜š¡ ¢†—£žBÏK§£ž~™|¢†˜š©„¡îæJžl½ ° ©JŸ/«s™lÆ9ّŸ/§s¥l˜¯J­;¢|© ¢†—£ž˜š«sž~¡Î¢†˜åÇ ¥lg¢†˜ª©„¡©JÌ3¢†—£žJ¡s™ ° žlŸ<˜š™M¢†—£žŸ†ž~¥U©J¼„¡s˜å¦ ¢†˜ª©„¡w©JÌ¢†—£žaJ¡s™ ° žlŸ4¢ò½K› žJÆ4㘚¡s¥Už)J­š¾Í©„™|¢4J­ ° ~½F™ ¢†—£žwJ¡2™ ° žlŸ¬¢ò½K›3žw˜š™Í¡s©J¢ÐžUÑF›2­š˜š¥l˜ª¢Í˜¯¡ï¢†—£žwÏK§£ž~™É¦ ¢†˜ª©„¡‹©JŸÈ¢†—£ž;J¡2™ ° žlŸ¨ ° ž;¡£žlž~«‹¢|©?Ÿ/ž~­ª½Ð©„¡Ë­ªžUÑ£˜š¥U©g¦ ™|ž~¾ÍJ¡+¢†˜š¥¹˜š¡£ÌH©JŸ/¾Íg¢†˜š©„¡Ð›sŸ/©¿F˜š«£ž~«¬®+½¬S›2gŸ/™|žlŸv¢|© ˜š«£ž~¡+¢†˜ªÌH½B¡2J¾?ž~«³ž~¡+¢†˜ª¢†˜ªž~™içHžJÆý¼£Æí¡sJ¾?ž~™Í©JÌ›3žl©g¦ ›2­ªžÊ©JŸw©JŸ†¼„J¡s˜ªü~g¢†˜ª©„¡2™l¨;¾?©„¡£žl¢†gŸ†½±§s¡2˜ª¢†™l¨«sg¢|ž~™ J¡s«y¢|ž~¾?›3©JŸ/J­Hø_­ª©K¥lg¢†˜š¿JžËžUÑK›sŸ/ž~™†™†˜ª©„¡s™~¨›sŸ†©F«s§s¥U¢†™ J¡s«ï©J¢†—sžlŸ/™ZéZƜ–—£žËŸ†ž~¥U©J¼„¡s˜ª¢†˜š©„¡³©JÌ-¢†—£žwJ¡s™ ° žlŸ ¢ò½K›3žJ¨'¢†—sŸ†©„§£¼„——£žb™|ž~¾ÍJ¡+¢†˜š¥‹¢†g¼DŸ†žl¢†§sŸ/¡£ž~«y®K½ ¢†—£ž&›2gŸ/™|žlŸ~¨M¥UŸ†ž~g¢|ž~™ Zö„öJðòñböQR'ñ ^Æw–—£ž žUÑF¢|Ÿ/J¥U¢†˜ª©„¡a©JÌK¢†—sž'J¡s™ ° žlŸ€J¡2«;˜ª¢†™ žl¿gJ­š§sg¢†˜š©„¡)gŸ†ž ®2J™|ž~«R©„¡RÍ™|žl¢©JÌ—£ž~§sŸ/˜š™|¢†˜š¥l™~Æ ® „6€ Ž ‚W¯#€(‚ –—£ž)›2gŸ/™|žlŸ¥U©„¾a®2˜š¡sž~™˜š¡£ÌH©JŸ/¾Íg¢†˜š©„¡‹ÌdŸ†©„¾ ®sŸ†©„J« ¥U©¿JžlŸZg¼Jž4­ªžUÑ£˜š¥lJ­F«s˜š¥U¢†˜ª©„¡2gŸ/˜ªž~™ ° ˜ª¢†—™|ž~¾ÍJ¡+¢†˜š¥v˜š¡F¦ ÌH©JŸ/¾Íg¢†˜ª©„¡y¢†—sg¢Í¥U©„¡+¢|Ÿ/˜ª®P§£¢|ž~™¢|©Ê¢†—£žR˜š«£ž~¡+¢†˜åÇP¥l_¦ ¢†˜ª©„¡¤©J̋¢†—£žœ¡sJ¾?ž~«¤ž~¡+¢†˜ª¢†˜ªž~™lÆ ÃF˜š¡s¥Užy›2gŸ†¢É¦ù©JÌd¦ ™|›3žlž~¥/—œ¢†g¼J¼„˜š¡£¼ï˜š™&J¡œ˜¯¡Î¢|Ÿ/˜¯¡s™†˜š¥Ë¥U©„¾?›3©„¡£ž~¡+¢¬©JÌ ›2gŸ/™†žlŸ~¨ ° ž³—s¿Jž³žUÑK¢|ž~¡2«£ž~« \‘Ÿ/˜š­š­ Š ™b›2gŸ†¢É¦ù©JÌd¦ ™|›3žlž~¥/—í¢†g¼J¼JžlŸb˜š¡ ¢ ° © ° ~½F™lÆ ã ˜ªŸ/™|¢l¨ ° ž!—2~¿Jž J¥lÏK§s˜ªŸ†ž~«!¡sž ° ¢†g¼J¼„˜š¡£¼RŸZ§s­ªž~™J¡s«!™†ž~¥U©„¡s«s­ª½J¨ ° ž —s¿Jž§2¡s˜åÇ2ž~«¤¢†—sž «s˜¯¥U¢†˜ª©„¡sgŸ/˜ªž~™D©JÌR¢†—£žœ¢†g¼J¼JžlŸ ° ˜ª¢†—œ™|ž~¾ÍJ¡+¢†˜š¥«s˜š¥U¢†˜ª©„¡2gŸ/˜ªž~™Ð«sžlŸ/˜ª¿Jž~«œÌdŸ†©„¾ ¢†—£ž V¹gülžl¢|¢|žlžlŸ/™aJ¡s«bÌdŸ†©„¾ cÊ©JŸ/«2Àžl¢ÍçìÄi˜š­š­ªžlŸ !!JӄéZÆ èÉ¡ J«s«s˜š¢†˜ª©„¡y¢|©D¢†—£ži˜š¾?›2­šž~¾?ž~¡Î¢†g¢†˜š©„¡ï©J̼JŸ/J¾¦ ¾ÍgŸMŸ/§s­šž~™l¨ ° žÈ—s~¿JžÈ˜š¾Í›2­ªž~¾?ž~¡+¢|ž~«a—sž~§£Ÿ/˜š™|¢†˜¯¥l™O¥l_¦ 
›2g®2­šž<©JÌsŸ†ž~¥U©J¼„¡2˜ªü~˜š¡£¼¹¡sJ¾?ž~™©JÌ2›3žlŸ/™†©„¡s™l¨_©JŸ†¼„J¡s˜å¦ ü~g¢†˜ª©„¡s™~¨„­ª©K¥lg¢†˜š©„¡s™l¨„«sg¢|ž~™l¨+¥l§£Ÿ†Ÿ†ž~¡s¥l˜šž~™€J¡s«S›sŸ/©K«F¦ §s¥U¢†™lÆ<ÃF˜¯¾Í˜š­šgŸM—£ž~§£Ÿ/˜¯™|¢†˜š¥l™MŸ†ž~¥U©J¼„¡s˜šülž¡sJ¾?ž~«?ž~¡+¢†˜å¦ ¢†˜ªž~™¹™†§s¥l¥Už~™/™|ÌH§2­š­ª½Ë˜š¡RèÉØ±™|½F™|¢|ž~¾Í™lÆg-~¿F˜š¡£¼‹¢†—£ž~™|ž ¥lg›2g®2˜¯­š˜ª¢†˜ªž~™?›2Ÿ†©¿Jž~«œ¢|©®3žw§2™|žlÌH§2­ÌH©JŸ&­ª©F¥lg¢†˜š¡£¼ ¢†—£ž›3©„™†™/˜ª®2­ªžvJ¡s™ ° žlŸ/™ ° ˜ª¢†—s˜š¡;™|žl¢<©JÌ3¥lJ¡2«s˜š«sg¢|ž ›2gŸ/g¼JŸZg›2—s™lÆ °±†6¯#²_€(‚´³œµ”‚~ƒ9”ˆ>–¶† –—£ž¹›2gŸZ™|žlŸÈž~¡sg®2­ªž~™v¢†—sžŸ†ž~¥U©J¼„¡s˜ª¢†˜ª©„¡‹©JÌ¢†—£žÐöh KR'ñDs Zö„öJðòñ ˜¯¡D¢†—£ž¬›2gŸ/g¼JŸZg›2—ÆÊØ<J¥Z—žUÑK¦ ›sŸ†ž~™/™†˜ª©„¡S¢†g¼J¼Jž~«?®K½a¢†—£žv›2gŸZ™|žlŸ ° ˜ª¢†—S¢†—£žÈJ¡s™ ° žlŸ ¢ò½K›3žb®3ž~¥U©„¾?ž~™‹©„¡sžb©JÌa¢†—£žiJ¡2™ ° žlŸË¥lJ¡s«2˜š«sg¢|ž~™ ÌH©JŸ&!›PgŸ/g¼JŸ/g›2—Æ»ÃF˜¯¾Í˜š­šgŸ?¢|©¢†—£žw›2gŸZg¼JŸ/g›2—F¦ ° ˜š¡2«£© ° ™v§s™†ž~«Ë˜š¡¬©JŸ/«£žlŸ/˜š¡s¼a¢†—£ž›PgŸ/g¼JŸ/g›2—s™~¨ ° ž ž~™|¢†g®2­¯˜š™†—yJ¡íöQR'ñ hRd2RîÌd©JŸ?ž~J¥Z—³J¡s™ ° žlŸ ¥lJ¡s«s˜¯«sg¢|žJƀ–€©žl¿gJ­š§sg¢|ž'¢†—£ž<¥U©JŸ†Ÿ†ž~¥U¢†¡sž~™†™©JÌKž~J¥Z— J¡s™ ° žlŸ4¥lJ¡2«s˜š«sg¢|žJ¨ Ð¡£ž ° žl¿_J­š§2g¢†˜ª©„¡w¾?žl¢|ŸZ˜š¥;˜š™ ¥U©„¾?›P§£¢|ž~«³ÌH©JŸ&ž~J¥Z—œJ¡2™ ° žlŸ†¦ ° ˜š¡s«£© ° Æ cʞi§s™†ž ¢†—£ž)Ìd©„­š­š© ° ˜š¡s¼™/¥U©JŸ†ž~™DC ] ™ öJóÍñ R8U lñ/þlÿ2ñ  ^ñ h K|ñDC˘ª¢?˜š™¥U©„¾?›2§£¢|ž~« ˜š¡‹¢†—sž)™†J¾?ž ° ~½ËJ™Ìd©JŸvô£öÉö |öZô9hzRdi—2RSZÆ ] Gvÿû ðìÿsöJð KÊ.h K|ñDC¬˜¯™a ePg¼Ê™|žl¢ ° —£ž~¡ ¢†—£žJ¡s™ ° žlŸÈ¥lJ¡s«s˜¯«sg¢|ž-˜¯™'˜š¾Ð¾?ž~«s˜šg¢|ž~­ª½?ÌH©„­š­ª© ° ž~« ®K½¬Í›2§s¡s¥U¢†§sg¢†˜š©„¡Ë™†˜ª¼„¡Æ ] ZdJóSóÍö c R8U°h¥D †ñ C?¾?ž~J™/§£Ÿ†ž~™¢†—£ž&¡+§s¾?¦ ®3žlŸ-©JÌ<Ï+§sž~™|¢†˜ª©„¡ ° ©JŸZ«s™¢†—2g¢-ÌH©„­š­ª© ° ¢†—sžSJ¡s™ ° žlŸ ¥lJ¡s«2˜š«sg¢|ž ° —£ž~¡¤¢†—£ž ­šg¢|¢|žlŸB˜¯™D™†§s¥l¥Užlž~«sž~« ®K½ ï¥U©„¾Í¾ÐFƸº ¾Í_Ñ£˜š¾a§2¾ ©JÌa¢†—£Ÿ†žlž ° ©JŸ/«s™ËgŸ†ž ™|©„§s¼„—΢lÆ ] ™ ögóÐñ ô£ö~ñ ^ÿ~·~ð|ñZñ h K|ñDC ¥U©„¾?›2§£¢|ž~™%¢†—£ž ¡K§s¾a® žlŸ-©JÌ<ÏK§£ž~™|¢†˜ª©„¡ ° ©JŸ/«s™-Ìd©„§2¡s«b˜¯¡w¢†—£ž™†J¾?ž ›2gŸZ™|ž;™†§£®£¦ù¢|Ÿ/žlž)J™È¢†—£ž)J¡2™ ° 
žlŸ4¥lJ¡s«s˜š«sg¢|žJÆ ] ™ öJóÍñ lñ Pðòñ  ^ñ h |ñDCM¥U©„¾Í›2§£¢|ž~™¢†—£ž¡K§s¾a®3žlŸ ©JÌÏ+§£ž~™†¢†˜ª©„¡ ° ©JŸ/«2™'ÌH©„§s¡s«‹˜š¡&¢†—£ž™†J¾?ž™|ž~¡+¢|ž~¡s¥Už J™¢†—£ž;J¡s™ ° žlŸ4¥lJ¡s«2˜š«sg¢|žJÆ ] DöJð, Fñ XJñD°R8U°h¥D †ñ C ¥U©„¾?›2§£¢|ž~™û¢†—£ž ¡K§s¾a® žlŸ)©JÌ'æJžl½ ° ©JŸZ«s™)¾Íg¢†¥Z—£ž~«B˜š¡¢†—£žÐJ¡s™ ° žlŸ|¦ ° ˜¯¡s«£© ° Æ ] › i^ðùö ZñDh¥D †ñ CJ«s«s™Ë¢†—sžD«s˜š™|¢†J¡2¥Už~™çì¾?ž~_¦ ™†§sŸ†ž~«³˜š¡³¡K§s¾a®3žlŸÐ©JÌ ° ©JŸZ«s™ZéS®3žl¢ ° žlž~¡œ¢†—£žbJ¡F¦ ™ ° žlŸ¹¥lJ¡s«s˜š«sg¢|žaJ¡s«w¢†—sž)©J¢†—£žlŸ¹Ï+§sž~™|¢†˜ª©„¡ ° ©JŸ/«s™ ˜š¡‹¢†—sž)™†J¾?ž ° ˜š¡s«£© ° Æ –—sž'©r¿JžlŸ/J­š­F™†¥U©JŸ†žvÌH©JŸ-¼„˜ª¿Jž~¡J¡s™ ° žlŸM¥lJ¡s«s˜š«sg¢|ž ˜š™¥U©„¾?›2§£¢|ž~«Ë®K½—C ¸¹»º¥¼A½ ¾9¿ ÀYÁ•Â'Ã;¹»Ä ¿ Å«Æ Ç\ÈÊÉ9Ë0ºÌ¿ Í,¹»Ä À ÂA¿;ÎAϏ¿'¾…Ã'¿AÁ8Â'Ã;¹»Ä;¿‹Ð ÐÆ ÇÊÈ0ÑÏ#¾…ÃAÒϏË0Ò½8¹»¾ Â;½Ó»¾Á•Â'Ã;¹»Ä;¿CÐ Ð3Ô\Õ\ȅ¸¹»º ºtË Ö Í,¹»Ä À»ÂÁ8Â'ù0Ä;¿3Ð ÐÆ ÇÊÈ\É9Ë»º¥¿ ×Ë»ÄÂA¿ ÂÏW¼AÒÄ ¿¿AÁ8Â'Ã;¹»Ä;¿‹Ð ÐÆ ÇÊÈ\É9Ë»º¥¿ ÂA¿'¾Ò>¿'¾…Ã'¿AÁ8Â'Ã;¹»Ä;¿‹Ð ÐÆ ÇÊÈ»ØÙ˻ҕÃڏ¿ À Û\¿'Ü0Í,¹»Ä À0Â'Á8Â'Ã;¹»Ä;¿Ý Ý.ÞCÈß àtá>âKã¨ä#åTæAçCÝ_âKæAè0éYç Ùȧ£Ÿ†Ÿ†ž~¡+¢†­ª½ ¢†—£ž±¥U©„¾a®2˜š¡£ž~«÷™/¥U©JŸ†ž±Ÿ†žl›sŸ†ž~™†ž~¡Î¢†™ J¡i§s¡F¦ò¡£©JŸZ¾ÍJ­š˜ªülž~«Ë¾?ž~J™/§£Ÿ†ž)©J̝J¡s™ ° žlŸ¹¥U©JŸ†Ÿ†ž~¥U¢É¦ ¡£ž~™/™lÆ –—£ž J¡s™ ° žlŸDžUÑF¢|Ÿ/J¥U¢†˜ª©„¡¸˜š™D› žlŸ/Ìd©JŸ/¾Íž~« ®K½ ¥Z—£©K©„™†˜š¡£¼³¢†—sž!J¡s™ ° žlŸb¥lJ¡s«s˜¯«sg¢|ž ° ˜ª¢†—¢†—£ž —s˜š¼„—£ž~™|¢R™†¥U©JŸ†žJÆûÃK©„¾?žÊ©JÌ¢†—£ž!™†¥U©JŸ†ž~™bg›s›sŸ†©ÑF˜ª¦ ¾Íg¢|žw¿JžlŸ†½ï™†˜š¾?›2­šž¬g®«s§s¥U¢†˜š©„¡s™lÆ ã ©JŸÐžUÑ£J¾?›2­ªžJ¨ ¢†—£ž-Ÿ†ž~¥U©J¼„¡s˜š¢†˜ª©„¡¬©JÌ æJžl½ ° ©JŸ/«s™v©JŸv©J¢†—£žlŸÈÏK§£ž~™|¢†˜ª©„¡ ° ©JŸ/«s™˜¯¡RJ¡wg›s›3©„™†˜š¢†˜ª©„¡Ë«£žl¢|žlŸ/¾Í˜¯¡£ž~™v¢†—£žOGÈÿ— Dh ðìÿsöJð ½2.h¥D †ñU¨¤¢†—sžê™ ögóÐñ ôFöKlñ Zÿ·lð†ñ/ñDh D †ñU¨ë¢†—£žëZdgóó?ö c R8UKh |ñJ¡s«6¢†—£ž ™ öJóÍñ lñ Pðòñ  ^ñ h |ñ¢|©Ð¼J©&§£› Æ<Äb©JŸ†žl©¿JžlŸ¨P¢†—£ž ™†J¾Íž³™|ž~Ï+§sž~¡s¥Užï™†¥U©JŸ†žï¼„˜ª¿Jž~™Ê—s˜š¼„—£žlŸ›2­šJ§s™†˜š®2˜š­å¦ ˜ª¢ ½Í¢|©J¡s™ ° žlŸv¥lJ¡2«s˜š«sg¢|ž~™'¢†—sg¢È¥U©„¡Î¢†J˜¯¡¬˜¯¡Í¢†—£ž~˜ªŸ ° ˜¯¡s«£© ° ™|ž~ÏK§£ž~¡s¥Už~™©JÌÏK§£ž~™|¢†˜ª©„¡ ° ©JŸ/«s™v¢†—2g¢ÈÌd©„­å¦ ­ª© ° 
¢†—£ž!™†J¾?žD©JŸ/«£žlŸ/™Ë˜š¡¢†—sž!Ï+§£ž~™†¢†˜ª©„¡Æ÷–—s˜š™ ™†¥U©JŸ†žag›s›sŸ†©ÑF˜š¾Ðg¢|ž~™¢†—£ž;J™†™/§s¾?›s¢†˜ª©„¡‹¢†—2g¢¥U©„¡F¦ ¥Užl›s¢†™¬gŸ†žw­ªžUÑ£˜š¥lJ­š˜šülž~«³˜š¡y¢†—£ži™†J¾?žw¾ÍJ¡2¡£žlŸÐ˜š¡ ¢†—£ž-Ï+§£ž~™†¢†˜ª©„¡ÐJ¡s«¬˜š¡?¢†—sž4J¡s™ ° žlŸ~Æ+4© ° žl¿JžlŸ~¨£¢†—£ž ¥U©„¾a®2˜š¡£ž~«Í™/¥U©JŸ†žJ­š­ª© ° ™MÌH©JŸ<æJžl½ ° ©JŸ/«2™J¡s«ÍÏK§£ž~™É¦ ¢†˜ª©„¡ ° ©JŸ/«s™¢|©Í®3ž;¾Íg¢†¥/—£ž~«b˜š¡‹¢†—£ž;™†J¾Íž;©JŸ/«£žlŸ~Æ –g®2­ªž "˘¯­š­š§s™|¢|ŸZg¢|ž~™-™†©„¾?žÍ©JÌv¢†—£žÐ™†¥U©JŸ/ž~™¢†—sg¢ ° žlŸ†ž,g¢|¢|Ÿ/˜š®2§£¢|ž~«Ò¢|© ¢†—£žë¥lJ¡s«s˜š«2g¢|ž J¡s™ ° žlŸ/™ ²´µµ¶œ—sJ™ažUÑF¢|Ÿ/J¥U¢|ž~«y™†§s¥l¥Už~™†™|Ìì§s­š­š½JÆRÙȧ£Ÿ†Ÿ†ž~¡+¢†­ª½ ° žv¥U©„¾Í›2§£¢|ž‘¢†—£žÈ™†J¾?žv™†¥U©JŸ†žvÌd©JŸM®3©J¢†—™†—£©JŸ†¢MJ¡s« ­ª©„¡£¼¬J¡s™ ° žlŸ/™l¨3J™ ° žJ¡sJ­ª½KülžS˜š¡w¢†—£ža™/J¾?ž ° ~½ ¢†—£ž)J¡2™ ° žlŸ ° ˜š¡s«£© ° ™lÆ q   ìWšK›¥œû£ ¢9œišÇÆK›ÄƒÇŰœ…šÇÈ ›È…ÇÆÇÍKÈ ÅzÌ Åzæz£ ¤›zÌ å£ ¢ Ç›z¢ ÇQ™£ œ…šS¢ÊÉMăõMœ…Åză¢¢ ͤšS›¢~í9£ ÆUËUÅzÌ ÍKÆUœ…›zÈÊÉ ÄƒÅËUÇăÇÆUœ…¢~…£ ¤¢•ï>ðK¢Ê™2Ç›zÈ £ Ææ5ð›zÆåS£ ƤÅzšÇÈ ÇÆUœ ËUÅJ¤›Ì £ ñ›¥œi£ ÅzÆ¢î¸æzÈ ÍÆUœ…¢ð¢ šÅzÍMœi¢¨ðÇœ…¤;ï8ò !ûÆK¢Ê™2ÇÈ M¤ÅzÈ Ç'í2í'Ÿ  Ÿz¦A¶³»A·¥´¥Ø½ã)·i³¹|³´¥·0Õ»zµ¸³ îࢠšÅzÈʜ>ï ÷»¥Â¾¹,µàµÊ¹»" ·ÿ¿¼ 㥾»¥ºS¹A´¥¼ã q  §Ÿ ìWšKÇÈ ÇQ£ ¢œ…šÇ.›z¤œ…È Ç¢…¢(S›zÈ £ ŝÆ3FQ›ËM£ Ç¢¨ð ¡ ÍKÈ £ ÇåCò !ûÆK¢Ê™2ÇÈ M¤ÅzÈ Ç'íñŸUíK  zž.ž»¥º µ¸³¹9û¥ÂK¼ µÊ´¥ØÓ¼TØÓ¼·ؽãJ¹ îࢠšÅzÈʜ>ï þ—»¥ÁÓÁ ¿¶»»¥ãOG¹ºd¹µÊ¹¾¿ q  áz§ ìWšKÇÈ ÇQ£ ¢œ…šÇ~%Œ›&¢S›šK›ÌWò !ûÆK¢Ê™2ÇÈ M¤ÅzÈ Ç'ퟝ¦KK ¦z¦AÁ ظ·µ.» úS»¥¾¹|µÓ³´¥¼Ì5u ódרóµÓؽ¹· îàÌ ÅzÆæKï µ¸³J¾»¥ÂMäz³»¥Âµ9µ¸³¹¶»¥¾Á¸ãAØÓ¼ ×Á ÂDãJ¹·ƒµ¸³¹6¾¹´zµ ô ¹¹óÃûظ¼8úû·µÓ¾´¥Á ؽ´ABµ¸³¹S÷´$|ên´¥³´¥Á°ØÓ¼nã¥Ø½´AB G³´¥¾,µÓ¾¹Y" ·.G´zµ¸³¹㥾´¥Á ØÓ¼ D ¾´¥¼ ×¹ B9´¥¼ ã ÿ ¹¾¹ÊäM¹¼ µ¸Øԗ´zµ¸Ø½»¥¼´¥Á2Ö´¥¾ìAØÓ¼g÷´¥¼KÙ¥´¥¼ؽ´ ÷Œ³¹ û¥ÂK¾—·ØàµÊ¹·0ë´ðK´¥¼m³´¥·—Á ØÓ·µÊ¹ã)ظ¼×Á ÂãJ¹ q  ñ¥ázž ìWšK›¥œû£ ¢9œišÇÆK›¥œi£ ÅzÆ›zÌ £ œóÉAÅ…+ŒÅzõ Ç~*UŝšKÆ +°›ÍKÌÊõ8õTò !ûÆK¢Ê™2ÇÈ M¤ÅzÈ Ç'ퟝ¦UáK ¦Už)·µÊ´JÕظÁ ؍ٝ¹)µÓ³¹0×»¥ÂK¼ µ¸¾¿A¶Øàµ¸³+Øàµ¸· îàÌ ÅzÆæKï ³¹Á ð\B2µ¸³¹6G´zµÓ³»¥Á ؽ×|³Jؽ¹¾´¥¾×³J¿)·µÊ»¥Âµ¸Á ¿S³¹Á¸ãS»¥Âµ û¥¾ðKÁ ¾´¥Á ØÓ·ºCBØÓ¼IÁ¸´¥¾ÊäM¹ûð´¥¾µQ´zµµÓ³¹|¾ÊäzØÓ¼ä+» à Ö»¥Á 
[Table 3: examples of correctness scores.]

4 Performance evaluation

Several criteria and metrics may be used to measure the performance of a QA system. In TREC-8, the performance focus was on accuracy. Table 4 summarizes the scores provided by NIST for our system. The metric used by NIST for accuracy is described in (Voorhees and Tice 1999).

[Table 4: accuracy performance for short and long answers.]

Another important performance parameter is the processing time to answer a question. On the average, the processing time per question is 61 sec. There are four main components of the overall time: (1) question processing time, (2) paragraph search time, (3) paragraph ordering time, and (4) answer extraction time. Table 5 summarizes the relative time spent on each processing component. The answer extraction dominates the processing time, while the question processing part is negligible.

[Table 5: time performance — relative time spent per processing component.]

5 Conclusions

In principle, the problem of finding one or more answers to a question from a very large set of documents can be addressed by creating a context for the question and a knowledge representation of each document, and then matching the question context against each document representation. This approach is not practical yet, since it involves advanced techniques in knowledge representation of open text, reasoning, natural language processing, and indexing that currently are beyond the state of the art. On the other hand, traditional information retrieval and extraction techniques alone cannot be used for question answering, due to the need to pinpoint exactly an answer in large collections of open domain texts. Thus, a mixture of natural language processing and information retrieval methods may be the solution for now.

In order to better understand the nature of the QA task and put this into perspective, we offer in Table 6 a taxonomy of question answering systems. It is not sufficient to classify only the types of questions alone, since for the same question the answer may be easier or more difficult to extract depending on how the answer is phrased in the text. Thus we classify the QA systems, not the questions. We provide a taxonomy based on three criteria that we consider important for building question answering systems: (1) knowledge base, (2) reasoning, and (3) natural language processing and indexing techniques. Knowledge bases and reasoning provide the medium for building question contexts and matching them against text documents. Indexing identifies the text passages where answers may lie, and natural language processing provides a framework for answer extraction.

[Table 6: a taxonomy of Question Answering Systems, classified by knowledge base, reasoning, and NLP/indexing techniques. Class 1 systems rely on dictionaries and simple pattern matching, the answer being a simple datum or list of items found verbatim in a sentence or paragraph (e.g. "What is the largest city in Germany?" -- "Berlin, the largest city in Germany ..."); Class 2 systems use ontologies and low-level semantics, with answers contained in multiple sentences scattered throughout a document (e.g. "How did Socrates die?" -- "Socrates poisoned himself"); Class 3 requires very large knowledge bases and semantic indexing, with answers across several texts (e.g. "What are the arguments for and against prayer in school?"); Class 4 draws answers from across a large number of domain-specific documents, with knowledge acquired automatically (e.g. "Should we raise interest rates at their next meeting?"); Class 5 requires world knowledge and special-purpose reasoning, the answer being a solution to a complex, possibly developing scenario (e.g. "What should be the US foreign policy in the Balkans now?"). The degree of complexity increases from Class 1 to Class 5, and it is assumed that the features of a lower class are also available at a higher class.]

Out of the 153 questions that our system has answered, 136 belong to Class 1 and 17 to Class 2. Obviously, the questions in Class 2 are more difficult, as they require more powerful natural language and reasoning techniques.

As we look to the future, in order to address questions of higher classes we need to handle real-time knowledge acquisition and classification from different domains, coreference, metonymy, special-purpose reasoning, semantic indexing and other advanced techniques.

References

Chris Buckley, Mandar Mitra, Janet Walz and Claire Cardie. SMART high precision: TREC 7. In Proceedings of the Text REtrieval Conference (TREC-7), 1998.

Sanda Harabagiu and Steven Maiorano. Finding answers in large collections of texts: paragraph indexing + abductive inference. In Working Notes of the AAAI Fall Symposium on Question Answering, November 1999.

Jerry Hobbs, Mark Stickel, Doug Appelt, and Paul Martin. Interpretation as abduction. Artificial Intelligence, 63, pages 69-142, 1993.

G. A. Miller. WordNet: A Lexical Database. Communications of the ACM, vol. 38, no. 11, pages 39-41, November 1995.

Dan Moldovan, Sanda Harabagiu, Marius Pasca, Rada Mihalcea, Roxana Girju, Richard Goodrum and Vasile Rus. LASSO: A Tool for Surfing the Answer Net. In Proceedings of the Text REtrieval Conference (TREC-8), pages 65-74, 1999. http://trec.nist.gov/pubs/trec8/papers/smu.ps

Dan Moldovan and Rada Mihalcea. Improving the search on the Internet by using WordNet and lexical operators. IEEE Internet Computing, vol. 4, no. 1, pages 34-43, 2000.

Ellen M. Voorhees and Dawn M. Tice. The TREC-8 Question Answering Track Evaluation. In Proceedings of the Text REtrieval Conference (TREC-8), pages 41-64, 1999. http://trec.nist.gov/pubs/trec8/papers/qa8.ps
[Paper title and author names are not recoverable from the source encoding.]
Computer Science Dept., Korea Advanced Institute of Science and Technology (KAIST), Kusong-dong, Yusong-gu, Taejon, Republic of Korea

Abstract

We investigate a novel approach to solve the problem of sparse data through dimension reduction. A linear algebraic technique called LSA/SVD is used to find co-relationships of sparse words. Three variant estimation methods are suggested and evaluated for estimating unseen noun-verb co-occurrence probability. The model shows promise as an alternative probability smoothing method.

1 Introduction

One of the most serious difficulties in statistical language processing is the so-called data sparseness problem. No matter how large the training set is, a substantial portion of the data is unseen. For unseen data, the Maximum Likelihood Estimation (MLE) probabilities are zero, and these zeros give us bad results all through the statistical process.

We are interested in P(w1, w2) and the prediction task P(w2|w1), that is, a bigram language modeling of word co-occurrences. P(w2|w1) is the conditional probability that a pair has second element w2 in V2 given that its first element is w1 in V1. In other words, P(w2|w1) can be regarded as a measure of the relationship between the words w1 and w2: for a given object noun, a verb that typically takes it as a direct object is more related than one that does not. Many features can be used to predict a relationship between two words, but we assume here that the only information we have is the frequencies.

To overcome the difficulty of sparse data, smoothing techniques like the Good-Turing method are widely used. Estimator-combining approaches such as linear interpolation and Katz's back-off method are also popular (Katz, 1987). They use the unigram probability P(w2) to estimate the bigram probability P(w2|w1) for an unseen data pair, disregarding the relationship between the two words. If unseen bigrams are made up of unigrams of the same frequency, the methods give them the same probability, causing a problem in estimating accurate probabilities.

In addition to the classical methods, similarity-based schemes have been successfully applied to the data sparseness problem. The nearest-neighbors similarity-based method uses a set of most similar words w1' to estimate the conditional probability P(w2|w1), and is said to perform almost 40% better than back-off (Dagan et al., 1999). These methods use various distributional similarity measures, such as KL-divergence or JS-divergence, to find similarity between words (Lee, 1999). For a sparse word, however, the distribution P(w2|w1) itself is sparse and it is difficult to find correct similarities between words, since the only means for measuring word similarity is the frequency. The more sparse the distribution of a word is, the more difficult it is to find acceptable similarities between words.

In this paper, we investigate a novel approach to solve the problem of sparse data by capturing latent relationships between words with only frequency information. Through reducing dimension by the linear algebraic technique LSA/SVD, we can eliminate zero values in P(w2|w1) as well as capture relationships between words. We believe that the dimension-reduced estimation model can be an alternative probability smoothing method.

The model consists of three parts: making a conditional probability matrix, projecting the matrix into a lower space, and estimating probabilities on the reduced space. In the third part, three variant estimating methods are suggested and compared with Katz's back-off method and a simplified nearest-neighbor similarity-based method. We evaluated the methods in a pseudo word sense disambiguation task and obtained promising results, though further evaluation is needed on a more realistic task. The optimal dimension size of the subspace is also investigated, showing the best results between 90 and 200, about 10% of the original dimension size. Finally, we show that the model does not degrade performance as the sparseness increases.

2 Dimension-Reduced Model

The dimension-reduced model uses the linear algebraic technique called LSA/SVD, which projects a matrix into a reduced space. First of all, to apply the linear algebraic technique, we need to represent the conditional probability P(w2|w1) in a matrix form (Section 2.1). After that, we project the matrix into a lower dimension subspace through SVD. We will show how the resultant space represents the relationship between the given word w1 and the predicting word w2 (Section 2.2). At last, we suggest three probability estimation methods on the reduced space (Section 2.3).

2.1 Conditional Probability Matrix

Any discrete conditional probability distribution can be represented in a matrix form. For a distribution P(w2|w1), the given words w1 in V1 make up the row entries and the predicting words w2 in V2 make up the column entries. Each element of the matrix holds the estimated conditional probability value of the two words, P(w2|w1). We define the conditional probability matrix and its row and column vectors as

  A = [a_ij] = [P(w2_j | w1_i)]                                      (1)
  w1_i = [P(w2_1 | w1_i), ..., P(w2_m | w1_i)],
  w2_j = [P(w2_j | w1_1), ..., P(w2_j | w1_n)]                       (2)

where n = |V1|, m = |V2|, 1 <= i <= n and 1 <= j <= m. For example, if we use the MLE estimator, a_ij = P_MLE(w2_j | w1_i) = freq(w1_i, w2_j) / freq(w1_i).

Table 1 shows an example of an MLE-estimated matrix. The task is predicting the main verb with a given object, that is, estimating the noun and verb co-occurrence probability P(v|n) where n is in N and v is in V. Note that each noun can be regarded as a point or a vector in a multi-dimensional space whose dimension size is equal to |V|. In the table, the nouns 'coffee' and 'beer' do not co-occur with the same verb and it is difficult to find similarity between them in this space. To find their latent relationship, we can project each row and column vector into a lower dimension space through latent semantic analysis.

[Table 1: An Example of Conditional Probability Matrix — five nouns (including 'coffee', 'beer' and 'bread') by six verbs; 'coffee' co-occurs with a single verb (probability 1), while the other rows spread their mass over two or three verbs.]

2.2 Projection by Latent Semantic Analysis

Latent Semantic Analysis (LSA) is known as a theory for extracting and representing the contextual-usage meaning of words. LSA uses singular value decomposition (SVD). It has been widely used in information retrieval tasks as a variant of the vector space model (Deerwester et al., 1990) (Dumais et al., 1997). Given the conditional probability matrix A with rank(A) = r, the SVD of A and the rank-k approximation matrix A_k are defined as

  A   = U S V^T       = sum_{i=1}^{r} s_i u_i v_i^T                  (3)
  A_k = U_k S_k V_k^T = sum_{i=1}^{k} s_i u_i v_i^T                  (4)

where U and V contain the left and right singular vectors of A, respectively, and S = diag(s_1, ..., s_r) is the diagonal matrix of singular values of A. The truncated SVD A_k, which is constructed from the k largest singular triples of A, is the closest rank-k matrix to A: the representations in the original space are changed as little as possible when measured by the sum of the squares of the differences, and one can prove that A_k is the best approximation to A for any unitarily invariant norm (Berry and Jessup, 1999). The left singular vector u_i and the right singular vector v_j correspond to the row vector w1_i and the column vector w2_j, respectively. By taking k elements of u_i and v_j, each given word w1 and predicting word w2 of P(w2|w1) is represented as a vector in the reduced space.

Figure 1 is an example of SVD on the noun-verb conditional probability matrix of Table 1. In Figure 1-D, both the nouns and the verbs are represented by vectors in a two dimensional space. Nouns which occur with similar verbs are grouped with each other even if they never co-occur with the same verb. For example, the nouns 'coffee' and 'beer' do not co-occur with the same verb in the original matrix (Table 1); however, they are near in the two dimensional space when measured with a cosine distance. This means that unseen word pairs (w1, w2) which do not co-occur in the training data may nonetheless be near in the reduced space. This derived representation, which captures word(w1)-word(w2) associations, is used for estimating the probabilities of unseen data.

[Figure 1: An Example of Singular Value Decomposition — A: conditional probability estimation matrix by naive frequency; B: singular value decomposition A = U S V^T; C: rank-2 approximation matrix A_2 = U_2 S_2 V_2^T; D: two-dimensional plot of the SVD result, with beverage-related and food-related words grouped along separate dimensions.]

2.3 Estimating Probabilities on Reduced Space

Until now, we constructed the word co-occurrence probability matrix and projected it into a lower dimension space. Now, we suggest three variant probability estimation methods in the dimension-reduced space. The first estimates P(w2|w1) by computing the distance between the given word w1 and the predicting word w2 in the reduced space. The second uses the rank-k approximation matrix. Third, the state-of-the-art similarity-based methods can be merged into our dimension-reduced model. Because the first two methods are not based on statistical theory, they should be explored further.

2.3.1 Method 1: Distance-based method

Through LSA, the matrix A is factored into the product of three matrices as in Equation 3, and u_i and v_j are considered as the row vector w1_i and the column vector w2_j in the k-dimension subspace, respectively. Figure 1-D shows the two-dimensional plot of the resultant U_k, V_k matrices. The distance-based method uses a normalized distance between u_i and v_j for estimating the probability:

  P(w2_j | w1_i) = (1 / Z_k) * (cos_k(u_i, v_j) + 1) / 2,
  cos_k(u_i, v_j) = sum_{t=1}^{k} u_i(t) v_j(t)
                    / ( sqrt(sum_t u_i(t)^2) * sqrt(sum_t v_j(t)^2) )   (5)

where Z_k is a normalizing factor and cos_k is the cosine distance in the k-dimensional space.

2.3.2 Method 2: Rank-k approximation matrix method

In LSA, we can create a rank-k approximation matrix A_k of the matrix A by setting all but the k largest singular values of A equal to zero (Equation 4). In this method, we consider each element of the rank-k approximation matrix A_k as the probability distribution of P(w2|w1) (Figure 1-C). To satisfy the requirements sum P(w2|w1) = 1 and P(w2|w1) >= 0, we use the following normalizing equation:

  P(w2_j | w1_i) = (1 / Z(w1_i)) * ( A_k(i,j) - min_t A_k(i,t) + d ),
  Z(w1_i) = sum_j ( A_k(i,j) - min_t A_k(i,t) + d )                  (6)

where Z(.) is a normalizing factor and d is a smoothing constant.

2.3.3 Method 3: Dimension-reduced similarity-based method

The similarity-based method (Dagan et al., 1999) and the dimension reduction technique can be merged into one model. The reduced dimension can be a better representation space than the original space for finding similarities between words. This approach finds the most similar words w1' to w1 in the reduced space and uses these words to estimate the probability P(w2|w1):

  P(w2|w1) = P_MLE(w2|w1)                              if freq(w1, w2) > 0
  P(w2|w1) = (1/|S|) sum_{w1' in S} P_MLE(w2|w1')      if freq(w1, w2) = 0   (7)

where S = {w1' | cos_k(u_1, u_1') >= t in the k-dimensional space}. Here k is the reduced dimension size, not a count of nearest words; the count of nearest words is determined by t, a threshold on the cosine value. (In previous similarity-based work, Dagan et al. used a weighted estimation equation, with weights derived from a dissimilarity measure such as the JS-divergence; here we use the more simplified unweighted average.)

3 An Illustrative Example

Here we use a concrete example to illustrate the effectiveness of our model. The example is based on Table 1 and the task is estimating the noun and verb co-occurrence probability P(v|n). There are two groups of words: beverage-related words and food-related words.

In Table 1, 'coffee' is a sparsely distributed noun and we expect P(drink|coffee) > P(eat|coffee). With MLE, it is not possible to rank the two probabilities since they are all 0. Katz's back-off also fails to distinguish them, since the unigram probabilities P(drink) and P(eat) are equal. In the similarity-based scheme we compute the JS-divergence to find the similarity between nouns, and JS(P(v|bread), P(v|coffee)) = JS(P(v|beer), P(v|coffee)) = 0.6931, which does not discriminate 'beer' and 'bread'.

The dimension-reduced model, however, solves all these problems. When we observe the third row in Figure 1-C, that is, the pre-normalized P(v|coffee), there are no zero values, unlike MLE. Furthermore, we end up with two groups of verbs: the beverage-related verbs have positive values in A_2, and the food-related verbs have negative values.

To make our example concrete, P(v|coffee) is constructed in Table 2 using the probability estimation functions described in the above section. MLE shows five zero values, causing data sparseness problems. Katz's back-off method and the similarity-based method cannot distinguish the food-related verbs from the drink-related verbs. In contrast, all the dimension-reduced models resolve the data sparseness problem and cluster nouns and co-occurring verbs reasonably. Thus, we can expect that the dimension-reduced model will show promising results in a real experiment.

[Table 2: An Illustrative Example — P(v|coffee) as estimated by MLE, Katz's back-off, the similarity-based method, and the three dimension-reduced methods.]

4 Experiment

We evaluated the dimension-reduced models on a pseudo word sense disambiguation task as in (Dagan et al., 1999). Each method is presented with a noun and two verbs, deciding which verb is more likely to have the noun as a direct object. The data preparation method and the error counting scheme are almost similar to those of the similarity-based methods (Dagan et al., 1999) (Lee and Pereira, 1999).

Performance is measured by the error rate, defined as

  error rate = (1/T) * (# of incorrect choices)

where T is the size of the test set. Test instances consist of noun-verb-verb triples (n, v1, v2), where both (n, v1) and (n, v2) are unseen in the training set. (n, v1) is selected such that it appeared at least twice as often as (n, v2) in the original verb-object pairs, and P(n, v1) > P(n, v2) is the correct answer. In addition, to consider Katz's back-off method as the baseline, v2 is chosen so that the back-off estimate prefers it, and thus the error rate of the back-off method is always 100%. (Katz's back-off estimator is defined as P_bo(v|n) = P_MLE(v|n) if freq(n, v) > 0, and a(n) P(v) otherwise, where a(n) is the back-off weight.)

For the similarity-based method, parameter tuning is important to improve performance, but we use the simplified unweighted average equation as in (Lee and Pereira, 1999). Since this equation is the same as our estimation method in Section 2.3.3, we can say that the comparison is fair. The number of similar nouns is determined such that it shows the best result on the test set.

4.1 Data Preparation

We prepared the test sets as follows:

1. Extract transitive verb and head noun pairs from Penn Treebank II.
2. Select the pairs for the 1,000 most frequent nouns.
3. Partition the selected pairs, 70% for the training set and 30% for the test set (3-fold).
4. For each test set, (a) remove seen pairs; (b) for each (n, v1), create (n, v1, v2) such that freq(n, v1) >= 2 * freq(n, v2) and the back-off estimate prefers v2.

Step 2 makes the P(v|n) matrix size fixed. Since it is difficult to find (n, v1, v2) triples that satisfy the Step 4-(b) criteria, the average test set size is small. Hence, we used a relatively large portion, 30% of the pairs, for building the test set. Table 3 summarizes the experiment data.

[Table 3: Training and Test Data — target corpus: Penn Treebank II; number of verb-object pairs; |N| x |V| matrix size; training set size; test set size (triples).]

4.2 Result

Table 4 shows the experimental error rates on the three test sets, using Katz's back-off as the baseline. The two dimension-reduced methods show much better performance than the other methods.

Table 4: Experimental Result (Error Rate)

           Katz's     Similarity-  Dimension-Reduced Model
           back-off   based        Distance-  k-Rank   DR-SIM
           (baseline) method       based      matrix
  Fold 1   1.0        0.623        0.362      0.386    0.586
  Fold 2   1.0        0.636        0.37       0.423    0.594
  Fold 3   1.0        0.645        0.366      0.402    0.593

The reason for such a good performance is that our model tries to find similarities between words toward two sides (the column space and the row space). The state-of-the-art similarity-based methods find similarities toward only one side. For example, their well-known similarity measure, the JS-divergence between a given word w1 and another word w1', is defined as JS(P(w2|w1), P(w2|w1')). It means that each row P(w2|w1) is compared to another row P(w2|w1') in Table 1; comparisons on the column side are not performed. That is why the similarity-based methods fail to grasp true relationships of word co-occurrences.

On the other hand, SVD, which is the mathematical background of our model, gives us a reduced-rank basis for both the column space and the row space simultaneously (Figure 1) (Berry and Jessup, 1999). As we showed in the example in Section 3, the dimension-reduced model extracts the underlying or latent structures of word co-occurrences well. Therefore, our model shows successful results in estimating the word co-occurrence probabilities of unseen data. However, the experiment is artificial, and since SVD is not directly related to probability theory, further theoretical investigation is required.

4.3 Optimal Subspace and Degree of Sparseness

We also investigated the change of the performances as the subspace size and the degree of sparseness vary. Figure 2 shows the performance of the distance-based DR model as the dimension of the subspace increases. When the dimension size is between 90 and 200, it shows the best result.

[Figure 2: Performance of the distance-based model as the subspace dimension increases.]
½’² ¶ ¼° ¨”·]¤sªH«¬ÁÊЦ©¼2¤¬¨« ¦©²Ž¨S« ¦¯í|¤ « ¼×° ³©³ ¾ ¦i¼2¤¬¨« ¦¯²Ž¨X« ¦¯í|¤Ýë' /Q %/$/$/ ì ¦i«)« Àß×·]¤¬¨õ­ ­n²È·|°gµ­ À¶ ¤"³©°g­n¤¬¨­OÎB² ¶ ¾ ·]²gÍÓ²6·|·|À¶ ¶ ¤¬¨·]¤î¶ ¤¬³©°g­ ¦¯²Ž¨Í « ¸”¦¯µŠÁ äO¦¯®ŽÀ¶ ¤Ûh–« ¸²žÎ%«‚­ ¸¤S¤]Þ[¤¬·]­2² ½)­ ¸¤ ¾ ¤|® ¶ ¤|¤Ü² ½ «nµa°g¶ «n¤¬¨¤¬« «¬Á Ï1¸¤%|«n­9¶° ¨é ¤ ¾ ¨²ŽÀ¨ °gµµ ¤¬°g¶ « ­ ¸¤–¼2²Ž«n­S½d¶ ¤¬ÄÀ¤¬¨­×­ ¦i¼2¤¬«×° ¨ ¾ %/$/$/g­ ¸,¶ ° ¨é ¤ ¾ ¨²ŽÀ”¨ÿ°gµµ ¤¬°g¶ «ù­ ¸¤ ³©¤¬° «n­¿½’¶ ¤¬ÄÀ¤¬¨õ­¿­ ¦i¼2¤¬«|ãà¦i¨ ­ ¸¤ ­n¶ ° ¦©¨”¦©¨®Q« ¤|­|Á Ï%¸¤C°žª ¤|¶ °g® ¤ ¤|¶ ¶ ² ¶,¶ °g­n¤ ¾ ²H¤¬«¨² ­ · ¸° ¨® ¤–¼lÀ”· ¸¿° «×­ ¸¤X«nµ”°g¶«n¤¬¨¤¬« «2¦©¨Í ·]¶ ¤¬° « ¤¬«|Á×Ï1¸¤|¶ ¤|½d² ¶ ¤¦©­s¦©«Ðµ”³©° À« ¦¯»”³¯¤l­n²X« °¬Ñ›­ ¸°g­ ­ ¸¤ ¾ ¦©¼¤¬¨« ¦¯²Ž¨6ÍÓ¶ ¤ ¾ À·]¤ ¾ ¼2² ¾ ¤¬³ ¾ ²¤¬«‚¨² ­l« ¸²žÎ µ ¤|¶ ½’² ¶ ¼° ¨·]¤ ¾ ¤|® ¶ °g­ ¦¯²Ž¨Ý²Ž¨Ýª ¤|¶ ÑÜ« µ”°g¶ «n¤ ¾ °g­ °6Á ‚ P >?A ¢ :E@ œ 8E>?A £¥¤ µ¶ ² µ[²Ž« ¤ ¾ ° ¨²Nª ¤¬³#°gµµ¶ ²Ž° · ¸ ·|° ³©³¯¤ ¾ ¾ ¦i¼2¤¬¨« ¦¯²Ž¨ÍÓ¶ ¤ ¾ À·]¤ ¾ ¤¬«n­ ¦©¼°g­ ¦©²Ž¨2¼2² ¾ ¤¬³½’² ¶ ¾ ¤¬° ³èÍ ¦©¨® Î)¦¯­ ¸ ¾ °g­ °«nµ”°g¶ «n¤¬¨¤¬« «Öµ¶ ² »”³©¤¬¼ÝÁ Ï1¸¶ ¤|¤ ªg°g¶ ¦©° ¨­¥¼2² ¾ ¤¬³©«›°g¶ ¤ù« À® ® ¤¬«n­n¤ ¾ ° ¨ ¾ ­ ¸¤|ÑC°g¶ ¤ ·]²Ž¼2µa°g¶ ¤ ¾ ­ ¸¤ µ ¤|¶ ½d² ¶¼° ¨·]¤Q°g®Ž° ¦i¨«n­ ! 
°g­ní$# « »”° · éõÍÓ²gÞX¼¤|­ ¸² ¾ ° ¨ ¾ « ¦©¼¦i³©°g¶ ¦¯­;ÑõÍÓ»”° « ¤ ¾ « ·¸¤¬¼2¤ Á ÊЦ©¼2¤¬¨”« ¦¯²Ž¨6ÍÓ¶ ¤ ¾ À·]¤ ¾ ¼2² ¾ ¤¬³·|° ¨¥» ¤° ³¯­n¤|¶ ¨°g­ ¦¯ª ¤ ƒ „…ƒ †_ƒ ‡_ƒ ˆ ƒ ‰ ƒ Š_ƒ ‹ ƒ Œ_ƒ _ƒ „"ƒ_ƒ „…ƒ ƒ,† ƒ_ƒI‡ ƒ_ƒ ˆ ƒ ƒ ‰ ƒ ƒIŠ ƒ_ƒ ‹ ƒ_ƒ,Œ ƒ_ƒ,_ƒ ƒŽ„"ƒ_ƒ ƒ \u‘"’”“ •u’_–D—…˜L™ “ š"›u‘…š_’…œy˜  ž žŸ ž ž  ¡ ¢ £ ¤ ¥ äO¦¯®ŽÀ¶ ¤´h[`¹û¤|¶ ½’² ¶ ¼° ¨·]¤2ª6«|Á Ê)¤|® ¶ ¤|¤² ½"« µ”°g¶ «n¤]Í ¨¤¬« « µ¶ ² »a°g»”¦©³©¦¯­;Ñ׫ ¼2²H² ­ ¸¦©¨® « · ¸¤¬¼2¤ Á Ï1¸¤–°g»”¦i³©¦¯­Eс² ½lÂûōÆÃ­ ¸°g­Ý¤]ç6­n¶ ° ·]­ «Ô° ¨ ¾ ¦©¨6Í ½’¤|¶ «Ý³©°g­n¤¬¨­Ý¶ ¤¬³©°g­ ¦¯²Ž¨«Ü² ½lÎB² ¶ ¾ «Ý¼°gé ¤¬«X¦©­ µ ²Ž«4Í « ¦¯»a³¯¤ ­n²X¤¬«n­ ¦©¼×°g­n¤µ¶ ² »”°g»a¦©³©¦¯­ ¦¯¤¬«² ½î«nµ”°g¶«n¤ ¾ °g­ ° ¶ ¤¬° «n²Ž¨”°g»”³¯Ñ Á‡ÂûÅÆ ¦©« °X½ À³©³¯Ñ¥° À­n²Ž¼×°g­ ¦©·Ý¼°g­ ¸6Í ¤¬¼°g­ ¦©·|° ³S­n¤¬·¸¨¦©ÄHÀ¤ Á ú;½ÝÎ"¤Õ¼°gé ¤Õ°C¼°g­n¶¦èç ½’¶ ²Ž¼ ° ¨ÑÔ®Ž¦¯ª ¤¬¨–¦©¨½’² ¶ ¼°g­ ¦©²Ž¨Ô²Ž¨·]¤ ã Î"¤‚·|° ¨–À«n¤ ­ ¸¤à¶ ¤ ¾ À·]¤ ¾ ¼°g­n¶ ¦èçù½’² ¶ ¤¬«n­ ¦i¼°g­ ¦©¨®‡µ”¶ ² »”°g»”¦©³¯Í ¦¯­;Ñ Á#£,¸”¦©³¯¤Ö­ ¸¤/ÅHÉÐʰ ¨° ³¯Ñ6« ¦©«›¦©«‡« ²Ž¼2¤|Î%¸°g­ ·]²Ž«n­ ³¯Ñ¥¦©¨›­n¤|¶¼«È² ½"­ ¦©¼2¤2½’² ¶³©°g¶ ® ¤×¼×°g­n¶ ¦èç ã?³¯¤¬« « ¤]ç6µ[¤¬¨”« ¦¯ª ¤2° ³¯­n¤|¶ ¨°g­ ¦©ª ¤¬«s« À·¸¥° «È½d²Ž³ ¾ ¦©¨®gÍE¦©¨–° ¨ ¾ ÅHÉÐÊ)ÍEÀµ ¾ °g­ ¦©¨®×¸”°¬ª ¤ » ¤|¤¬¨X« À® ® ¤¬« ­n¤ ¾ ë=æX¦©·¸°g¤¬³ £ Áwê"¤|¶ ¶ Ñܰ ¨ ¾ 2 ¤¬« « ÀµŠãÐ%'$'$' ì Á äÀ¶ ­ ¸¤|¶¦©¨ª ¤¬«n­ ¦¯®Ž°g­ ¦¯²Ž¨×¦©«O¨¤|¤ ¾ ¤ ¾ ¦©¨ » ² ­ ¸ ­ ¸¤]Í ² ¶ ¤|­ ¦©·|° ³à° ¨ ¾ ¤]ç6µ ¤|¶ ¦©¼2¤¬¨­ ° ³Ô« ¦ ¾ ¤ Á Ï1¸¤â« À®gÍ ® ¤¬«n­n¤ ¾ ¼² ¾ ¤¬³ ¾ ²¤¬«î¨² ­î¸°žª ¤ ¾ ¤|¤|µÝ»”° · é® ¶ ²ŽÀ¨ ¾ ²žª ¤|¶-µ¶ ² »”°g»”³©¦¯­;Ñ­ ¸¤|² ¶ Ñ Á«é%² µ ¤|½ À³©³¯Ñ ãaëyé)² ½ ¼° ¨¨ã %'$'$' ì « À® ® ¤¬«n­n¤ ¾ µ¶ ² »”°g»”¦©³©¦i«n­ ¦©·¿ÂOÅHú'Î%¸”¦©· ¸ ¦©« »”° «n¤ ¾ ²Ž¨ °¿«n­ °g­ ¦¯­ ¦i·|° ³s³©°g­n¤¬¨­X·|³©° « «Ô¼² ¾ ¤¬³Ð½’² ¶ ½ ° ·]­n² ¶%° ¨° ³¯Ñ6« ¦©«"² ½L·]²ŽÀ”¨õ­ ¾ °g­ °6Áú ¨Ü° ¾¾ ¦©­ ¦¯²Ž¨ãHÎB¤ °gµµ”³i¦¯¤ ¾ ²ŽÀ¶%¼2² ¾ ¤¬³ ­n²¤¬«n­ ¦i¼°g­n¤s»”¦¯® ¶° ¼ µ¶ ² »”°cÍ »”¦©³i¦¯­ ¦¯¤¬«B²Ž¨³¯Ñ Áy¸"² ¶ µ”À«4ÍÓ»”° « ¤ ¾ á)ÂM¹¦©«î«n² ¼°g­ À¶ ¤ ° ¨ ¾ ­ ¸¤Ô¼2¤|­ ¸² ¾ «¼‚À«n­2» ¤Ý­n¤¬«n­n¤ ¾ Î)¦¯­ ¸ù¼2² ¶ ¤ ¶ ¤¬° ³©¦i«n­ ¦©·2­ ° 
«né6«|Á¥Å6¦©¨·]¤S° ¨õÑ'·]²Ž¨ ¾ ¦¯­ ¦¯²Ž¨° ³µ¶ ² »”°cÍ »”¦©³i¦¯­EÑ ¾ ¦©« ­n¶ ¦¯»”À­ ¦©²Ž¨«·|° ¨,» ¤à¶ ¤|µ”¶ ¤¬«n¤¬¨­n¤ ¾ »Hс° ¼°g­n¶ ¦¯ç×½d² ¶¼ÝãÎ"¤È·|° ¨Ý·]²Ž¼»a¦©¨¤¬«"² ­ ¸¤|¶1¦©¨½’² ¶ ¼°cÍ ­ ¦¯²Ž¨Ý¦©¨S°2¼°g­n¶ ¦¯ç[㍰gµµ”³©ÑH¦©¨®²ŽÀ¶î¼2² ¾ ¤¬³[­n²¼2² ¶ ¤ ® ¤¬¨¤|¶ ° ³­ ° «néH«¬ã« À·¸2° «Î"² ¶ ¾ «n¤¬¨« ¤ ¾ ¦©« ° ¼l»”¦¯®ŽÀ°cÍ ­ ¦¯²Ž¨Ô° ¨ ¾ ÎB² ¶ ¾ ·|³©À«n­n¤|¶¦©¨®Á ¦ ™¥¢w§ A>©¨à:EJ”Ú«ªŠJaD/JaA Žœ Ï1¸”¦©«Î"² ¶ é2Î"° «"« Àµ”µ[² ¶ ­n¤ ¾ »Ñ ! Û Å ê 䛭 ¸¶ ²ŽÀ®Ž¸ ­ ¸¤ ±ŽæXÀ”³¯­ ¦©³©¦©¨®ŽÀ° ³ ú ¨½’² ¶ ¼°g­ ¦©²Ž¨ b1¤|­n¶ ¦©¤|ªc° ³f± µ¶ ²cü4¤¬·]­×°g­ ­ ¸¤XÆ)ú ÏL¶ ·Ý° ¨ ¾ Î"° « « Àµµ ² ¶ ­n¤ ¾ »HÑ æà¦©¨¦©« ­n¶ Ñ9² ½3¸-À”³¯­ À¶ ¤Ô° ¨ ¾ Ï?²ŽÀ¶ ¦©« ¼ À”¨ ¾ ¤|¶­ ¸¤ µ¶ ² ® ¶ ° ¼ ² ½ ! ¦i¨®lÅH¤ ün²Ž¨®3¹¶ ²cü4¤¬·]­B­ ¸¶ ²ŽÀ®Ž¸ ! Û Í bOÏ ê b)æ'Á"æà° ¨õÑ¿½ À¨ ¾ ° ¼2¤¬¨õ­ ° ³1¶ ¤¬«n¤¬°g¶ ·¸¤¬«SÎ"° « « Àµµ ² ¶ ­n¤ ¾ »HÑ¿­ ¸¤b­¬Èʱ½’À”¨ ¾ ² ½læà¦©¨¦i«n­n¶ ÑÖ² ½ Å6·|¦©¤¬¨·]¤9° ¨ ¾ ÏL¤¬· ¸¨²Ž³©² ® ÑâÀ¨ ¾ ¤|¶–°¿µ¶ ²cü4¤¬·]­–² ½ µ”³i° ¨XÅ6Ï ê ¹y^ /$/$/6Á c›J¯®4J Ÿ JaA ¢ J œ °   €  ’  ”±³²P ˜ƒ˜   ”´² …… ±  ” ´µ…† ”  ”  ·¶ z œ z ¸ …†(…  † z[n¹º¹»¹Rz½¼ ƒ‚V ˜l†  –‹ w „ ‰ … ‚  5…˜ ‰ »¾ ‡  †Q Œ**ŒŒ ‘ †Q†Q… ” Œ … “ †  „N„  ˜  –  … ‰ zÀ¿QÁ»ÂÃLÄTÅ;ƎÇÆÁÉÈ\ÅqÄTÅDÊ ± Ë |”Ì | ËÉÍ }»¹z ¼ Œ –(– € ……† ‡ … ‰ –(…† ± ¼ ‘8‰  ” €s‘R‚  ƒ‰n±ÏÎ  …† ’ …е ‘ † w ”  ‰n±ÒÑ ˆ  ‚  ‰Ó²  ” R ‘ …† ±  ” ÕÔ  Œ ˆ †*ÖsN† ‰ ˆ w ‚  ” z¤n¹»¹xRz °]” 5…Ø× ƒ”R’ „*‹ÿ˜ƒ–(… ” – ‰ … ‚  ” –  Œ  ” N˜»‹ w ‰(l‰ zÚÙÛÝÜÈ\ÅAÁÉÞFÛ ßáà=ÃwÆãâå俯ØÈ\ÄyÂÁÉÅQç;Û5ÂÄyÆØàèåßÛÉÈåéÅnßÛÉÈ\ê äŽÁÉà…ÄyÛÉÅQç;ÂØÄyÆÅ;ÂÆ ± |8ºë¨}ºì\Ì Ë ¹ Í |xDí5z ¼ ‘8‰  ”îÑ z €µ‘R‚  l‰Ø±ï$ Œ ˆ …˜ ² z ²" –n– ‚  ”¯±  ”  Ñ ˆ  ‚  ‰åð z ²  ” R ‘ …† z n¹º¹Dí5z«ñ ‘ –  ‚ 4–  Œ Œ †  ‰Q‰ w ˜l ”’N‘  ’ …T†(…–(†  …•4˜ ‘‰(ƒ”R’ ˜l4–(… ” – ‰ … ‚  ” –  Œ »” 5…× w ƒ”R’ z °]” éÅòâåâÚâ«éçÝó;È\ÄTÅDÊôçAèÉä«ó”Ûöõ\ÜÄTäÛÝÅI÷½È2ÛÉõõê Ç©ÁÉÅLÊÝܔÁnʺÆÏøqÆ2ùºàúÁÝÅAûæç»ówÆÆÂÎüÚÆØàÈ\ÄyÆØýÉÁÉÞ ±R“  ’ … ‰ ~ Í {| ± ¼ – ” ¾Ë †Qôþ ” •…† ‰n –‹ ±wï N† Œ ˆ z Ñ ˆ  ‚  ‰ Ö »¾ ‚  ”R” zÿ5¹»¹º¹Rz ¸ †  „N„  ˜ 
l‰ –  Œ ˜ƒ–(… ” – ‰ … w ‚  ” –  Œ »” R…× »”’ z °]” ÿ È2Û5ÂÆÆ7ûÝÄTÅDÊÝõôÛ ßŽçLé  é2ü  ± “  ’ … ‰ x Í  í5zDñ ¶ ï ¸ †(… ‰(‰ z ¼ ˜l•4 ï z ð 4–  z n¹~Dí*z ‰ – ƒ‚ 4–   ” Ý¾ “ †  „N„  ˜  –  … ‰ ¾ †  ‚ñ‰(“ N† ‰ … R–Q ¾Ë †¯– ˆ …V˜l ”’N‘  ’ … ‚  5…˜ Œ ‚ w “  ” … ” – »¾  ‰(“ …… Œ ˆ †(… Œ, ’”R …† zOé   øwÈ2ÁÉŔõÁºÂê à…ÄyÛÉŔõÒÛÉÅQâåÂÛÉÜLõ\àÄyÂõ9ç»ówÆÆÂ7ÃÓÁÝÅAû çqÄ Ê»ÅAÁÝÞ ÿ È2ÛnÂÆõõê ÄTÅLÊ ± ñå¼w¼ ¸ w Ë  ë Ë ì\Ì |xx Í |xR ±‚ † z ²P ˜ƒ˜   ”À² ……« ” æµR…† ”  ”   ¸ …†Q…  † z;5¹»¹»¹z €µƒ‰ –(†  „ ‘ w –   ” ˜ ‰(»‚  ˜ƒN†  –‹ ‚  5…˜ ‰ Ì ¶ ˜ ‘‰ –(…† ƒ”R’ • ‰ z ” …N†(… ‰ – ” … ƒ’ ˆ „  † ‰ z °]”,ÿ È7ÛnÂÆÆûÉÄTÅDÊÝõ Û ß àTÔÆnàTÃâåÅqŔܔÁÉÞ ¿QÆÆØàÄTÅLÊ Û ßôà=ÃwÆ âãõ7õØÛ5ÂÄyÁÝàÄyÛÝÅ ßÛÉȳ÷ ÛÉäÚóAÜà_ÁÉà…ÄyÛÉÅ;ÁÉÞ Ç ÄTÅDÊ»ÜÄ õ\àÄyÂõ ±«“  ’ … ‰ ˺ËöÍ |x ± ¼  ‚ …† ‰ …,– ± œ … ‡ …† w ‰ …‹ zºñ ‰(‰ 5Œ  4–   ” ¾Ë † ¶[ ‚V“‘ –Q–   ” N˜ ²"ƒ”R’‘Rƒ‰ –  Œ ‰ z ²P ˜ƒ˜   ”ô² …… z¤n¹º¹»¹Rz ï … ‰n‘ †(… ‰ »¾  l‰ –Q†  „ ‘ –   ” ˜ ‰(»‚ w  ˜l†  –‹ z °]”Àÿ È2Û5Â7ÆÆûÉÄTÅLÊÉõåÛuß à=ÃwÆØà=à âåÅqŔܔÁÝÞL¿ ÆÆØàê ÄTÅLÊ Ûuߎà=ÃwÆæâOõõØÛnÂØÄyÁÉà…ÄyÛÉÅôßÛÉÈ ÷ ÛÉäÚóAÜà_ÁÉà…ÄyÛÉÅ;ÁÉÞ Ç½ÄTŔê Ê»ÜÄ õ\àÄyÂõ ± “  ’ … ‰ {  ÍË { ± ¼  ‚ …† ‰ …,– ± œ … ‡ …† ‰ …‹ z ñ ‰(‰ 5Œ  4–   ” ¾Ë † ¶[ ‚ “R‘ –4–   ” ˜ ²"ƒ”R’‘Rƒ‰ –  Œ ‰ z  ˜l4– Š  € † ‚  Œ ï$ Œ ˆ N…˜ z …†Q†Q‹  ”  ˜  „K…,– ˆ Ô z  … ‰Q‰(‘R“ zVn¹»¹º¹Rz ï 4–(†  Œ … ‰n± •N… Œ –  † ‰(“  Œ … ‰n±  ”  ƒ” w ¾Ë † ‚ –   ” †(…–(†  …•4˜ zòçLé7âO¿ üÚÆØýnÄyÆ ± |Dë¨{»ìÌ ËºË  Í Ë }{Rz
2000
72
Distribution-Based Pruning of Backoff Language Models

Jianfeng Gao
Microsoft Research China
No. 49 Zhichun Road, Haidian District
100080, China
[email protected]

Kai-Fu Lee
Microsoft Research China
No. 49 Zhichun Road, Haidian District
100080, China
[email protected]

Abstract

We propose a distribution-based pruning of n-gram backoff language models. Instead of the conventional approach of pruning n-grams that are infrequent in training data, we prune n-grams that are likely to be infrequent in a new document. Our method is based on the n-gram distribution, i.e. the probability that an n-gram occurs in a new document. Experimental results show that our method performed 7-9% (word perplexity reduction) better than conventional cutoff methods.

1 Introduction

Statistical language modelling (SLM) has been successfully applied to many domains such as speech recognition (Jelinek, 1990), information retrieval (Miller et al., 1999), and spoken language understanding (Zue, 1995). In particular, the n-gram language model (LM) has been demonstrated to be highly effective for these domains. An n-gram LM estimates the probability of a word given the previous words, P(wn|w1,…,wn-1). In applying an SLM, it is usually the case that more training data will improve a language model. However, as training data size increases, LM size increases, which can lead to models that are too large for practical use. To deal with this problem, count cutoff (Jelinek, 1990) is widely used to prune language models. The cutoff method deletes from the LM those n-grams that occur infrequently in the training data. The cutoff method assumes that if an n-gram is infrequent in training data, it is also infrequent in testing data. But in the real world, training data rarely matches testing data perfectly. Therefore, the count cutoff method is not perfect. In this paper, we propose a distribution-based cutoff method. This approach estimates if an n-gram is “likely to be infrequent in testing data”.
To determine this likelihood, we divide the training data into partitions, and use a cross-validation-like approach. Experiments show that this method performed 7-9% (word perplexity reduction) better than conventional cutoff methods. In section 2, we discuss prior SLM research, including backoff bigram LM, perplexity, and related works on LM pruning methods. In section 3, we propose a new criterion for LM pruning based on n-gram distribution, and discuss in detail how to estimate the distribution. In section 4, we compare our method with count cutoff, and present experimental results in perplexity. Finally, we present our conclusions in section 5.

2 Backoff Bigram and Cutoff

One of the most successful forms of SLM is the n-gram LM. N-gram LM estimates the probability of a word given the n-1 previous words, P(wn|w1,…,wn-1). In practice, n is usually set to 2 (bigram), or 3 (trigram). For simplicity, we restrict our discussion to bigram, P(wn|wn-1), which assumes that the probability of a word depends only on the identity of the immediately preceding word. But our approach extends to any n-gram. Perplexity is the most common metric for evaluating a bigram LM. It is defined as,

    PP = 2^{ -\frac{1}{N} \sum_{i=1}^{N} \log_2 P(w_i | w_{i-1}) }        (1)

where N is the length of the testing data. The perplexity can be roughly interpreted as the geometric mean of the branching factor of the document when presented to the language model. Clearly, lower perplexities are better. One of the key issues in language modelling is the problem of data sparseness. To deal with the problem, (Katz, 1987) proposed a backoff scheme, which is widely used in bigram language modelling. The backoff scheme estimates the probability of an unseen bigram by utilizing unigram estimates.
It is of the form:

    P(w_i | w_{i-1}) = \begin{cases} P_d(w_i | w_{i-1}) & \text{if } c(w_{i-1}, w_i) > 0 \\ \alpha(w_{i-1}) \, P(w_i) & \text{otherwise} \end{cases}        (2)

where c(wi-1,wi) is the frequency of the word pair (wi-1,wi) in training data, Pd represents the Good-Turing discounted estimate for seen word pairs, and α(wi-1) is a normalization factor. Due to the memory limitation in realistic applications, only a finite set of word pairs have conditional probabilities P(wn|wn-1) explicitly represented in the model, especially when the model is trained on a large corpus. The remaining word pairs are assigned a probability by back-off (i.e. unigram estimates). The goal of bigram pruning is to remove uncommon explicit bigram estimates P(wn|wn-1) from the model to reduce the number of parameters, while minimizing the performance loss.

The most common way to eliminate unused counts is by means of count cutoffs (Jelinek, 1990). A cutoff is chosen, say 2, and all probabilities stored in the model with 2 or fewer counts are removed. This method assumes that there is not much difference between a bigram occurring once, twice, or not at all. Just by excluding those bigrams with a small count from a model, a significant saving in memory can be achieved. In a typical training corpus, roughly 65% of unique bigram sequences occur only once. Recently, several improvements over count cutoffs have been proposed. (Seymore and Rosenfeld, 1996) proposed a different pruning scheme for backoff models, where bigrams are ranked by a weighted difference of the log probability estimate before and after pruning. Bigrams with difference less than a threshold are pruned. (Stolcke, 1998) proposed a criterion for pruning based on the relative entropy between the original and the pruned model. The relative entropy measure can be expressed as a relative change in training data perplexity. All bigrams that change perplexity by less than a threshold are removed from the model.
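As an aside (ours, not part of the paper), the backoff estimate in equation (2) amounts to a two-level table lookup. A minimal Python sketch, where the probability tables and backoff weights are hypothetical toy values:

```python
# Hypothetical toy tables (not from the paper): discounted bigram
# probabilities P_d, unigram probabilities P_uni, and backoff weights alpha.
P_d = {("new", "document"): 0.20, ("training", "data"): 0.35}
P_uni = {"document": 0.05, "data": 0.08, "model": 0.04}
alpha = {"new": 0.7, "training": 0.6, "the": 0.9}

def backoff_prob(w_prev, w):
    """Equation (2): use the discounted estimate for a seen pair,
    otherwise back off to the weighted unigram estimate."""
    if (w_prev, w) in P_d:                  # c(w_prev, w) > 0
        return P_d[(w_prev, w)]
    return alpha.get(w_prev, 1.0) * P_uni.get(w, 0.0)

print(backoff_prob("new", "document"))  # seen pair: discounted estimate
print(backoff_prob("the", "model"))     # unseen pair: alpha("the") * P("model")
```

Pruning an explicit entry from P_d simply moves that pair onto the backoff path, which is why the backoff weights must be renormalized after pruning.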
Stolcke also concluded that, for practical purposes, the method in (Seymore and Rosenfeld, 1996) is a very good approximation to this method. All previous cutoff methods described above use a similar criterion for pruning, that is, the difference (or information loss) between the original estimate and the backoff estimate. After ranking, all bigrams with difference small enough will be pruned, since they contain no more information.

3 Distribution-Based Cutoff

As described in the previous section, previous cutoff methods assume that training data covers testing data. Bigrams that are infrequent in training data are also assumed to be infrequent in testing data, and will be cut off. But in the real world, no matter how large the training data, it is still always very sparse compared to all data in the world. Furthermore, training data will be biased by its mixture of domain, time, or style, etc. For example, if we use newspaper in training, a name like “Lewinsky” may have high frequency in certain years but not others; if we use Gone with the Wind in training, “Scarlett O’Hara” will have disproportionately high probability and will not be cut off. We propose another approach to pruning. We aim to keep bigrams that are more likely to occur in a new document. We therefore propose a new criterion for pruning parameters from bigram models, based on the bigram distribution, i.e. the probability that a bigram will occur in a new document. All bigrams with probability less than a threshold are removed. We estimate the probability that a bigram occurs in a new document by dividing training data into partitions, called subunits, and using a cross-validation-like approach. In the remaining part of this section, we first investigate several methods for term distribution modelling, and extend them to bigram distribution modelling. Then we investigate the effects of the definition of the subunit, and experiment with various ways to divide a training set into subunits.
Experiments show that this not only allows a much more efficient computation for bigram distribution modelling, but also results in a more general bigram model, in spite of the domain, style, or temporal bias of training data.

3.1 Measure of Generality Probability

In this section, we will discuss in detail how to estimate the probability that a bigram occurs in a new document. For simplicity, we define a document as the subunit of the training corpus. In the next section, we will loosen this constraint. Term distribution models estimate the probability Pi(k), the proportion of times that a word wi appears k times in a document. In bigram distribution models, we wish to model the probability that a word pair (wi-1, wi) occurs in a new document. The probability can be expressed as the measure of the generality of a bigram. Thus, in what follows, it is denoted by Pgen(wi-1, wi). The higher Pgen(wi-1, wi) is, the less informative the bigram is for one particular document, but the more general the bigram is across all documents. We now consider several methods for term distribution modelling, which are widely used in Information Retrieval, and extend them to bigram distribution modelling. These methods include models based on the Poisson distribution (Mood et al., 1974), inverse document frequency (Salton and Michael, 1983), and Katz’s K mixture (Katz, 1996).

3.1.1 The Poisson Distribution

The standard probabilistic model for the distribution of a certain type of event over units of a fixed size (such as periods of time or volumes of liquid) is the Poisson distribution, which is defined as follows:

    P_i(k) = P(k; \lambda_i) = \frac{\lambda_i^k e^{-\lambda_i}}{k!}        (3)

In the most common model of the Poisson distribution in IR, the parameter λi > 0 is the average number of occurrences of wi per document, that is λi = cfi / N, where cfi is the total number of occurrences of wi in the collection, and N is the total number of documents in the collection.
In our case, the event we are interested in is the occurrence of a particular word pair (wi-1, wi) and the fixed unit is the document. We can use the Poisson distribution to estimate an answer to the question: what is the probability that a word pair occurs in a document? Therefore, we get

    P_{gen}(w_{i-1}, w_i) = 1 - P(0; \lambda_i) = 1 - e^{-\lambda_i}        (4)

It turns out that using the Poisson distribution, we have Pgen(wi-1, wi) ∝ c(wi-1, wi). This means that this criterion is equivalent to count cutoff.

3.1.2 Inverse Document Frequency (IDF)

IDF is a widely used measure of specificity (Salton and Michael, 1983). It is the reverse of generality. Therefore we can also derive generality from IDF. IDF is defined as follows:

    IDF_i = \log(N / df_i)        (5)

where, in the case of bigram distribution, N is the total number of documents, and dfi is the number of documents that contain the word pair (wi-1, wi). The formula log(N/dfi) gives full weight to a word pair (wi-1, wi) that occurred in one document. Therefore, let us assume,

    P_{gen}(w_{i-1}, w_i) \propto \frac{c(w_{i-1}, w_i)}{IDF_i}        (6)

It turns out that based on IDF, our criterion is equivalent to the count cutoff weighted by the reverse of IDF. Unfortunately, experiments show that using (6) directly does not get any improvement. In fact, it is even worse than count cutoff methods. Therefore, we use the following form instead,

    P_{gen}(w_{i-1}, w_i) \propto \frac{c(w_{i-1}, w_i)}{IDF_i^{\alpha}}        (7)

where α is a weighting factor tuned to maximize the performance.

3.1.3 K Mixture

As stated in (Manning and Schütze, 1999), Poisson estimates are good for non-content words, but not for content words. Several improvements over Poisson have been proposed. These include the two-Poisson model (Harter, 1975) and Katz’s K mixture model (Katz, 1996). The K mixture is the better of the two. It is also a simpler distribution that fits empirical distributions of content words as well as non-content words. Therefore, we try the K mixture for bigram distribution modelling.
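Before turning to the K mixture details, note that the Poisson-based generality probability of equation (4) is a one-line computation. A sketch (ours, with hypothetical counts):

```python
import math

def p_gen_poisson(cf, n_docs):
    """Equation (4): probability that a bigram occurs in a document,
    under a Poisson model with lambda = cf / N (mean count per document)."""
    lam = cf / n_docs
    return 1.0 - math.exp(-lam)

# Hypothetical counts: a pair seen 50 times across 1000 documents.
print(round(p_gen_poisson(50, 1000), 4))  # 1 - e^(-0.05), about 0.0488
```

Since 1 - e^(-lam) is monotonically increasing in cf, ranking bigrams by this quantity gives the same order as ranking by raw count, which is why the paper notes that the pure Poisson criterion reduces to count cutoff.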
According to (Katz, 1996), the K mixture model estimates the probability that word wi appears k times in a document as follows:

    P_i(k) = (1 - \alpha) \, \delta_{k,0} + \frac{\alpha}{\beta + 1} \left( \frac{\beta}{\beta + 1} \right)^k        (8)

where δk,0 = 1 iff k = 0 and δk,0 = 0 otherwise. α and β are parameters that can be fit using the observed mean λ and the observed inverse document frequency IDF as follows:

    \lambda = cf / N        (9)

    IDF = \log(N / df)        (10)

    \beta = \lambda \times 2^{IDF} - 1 = \frac{cf - df}{df}        (11)

    \alpha = \lambda / \beta        (12)

where again, cf is the total number of occurrences of word wi in the collection, df is the number of documents in the collection that wi occurs in, and N is the total number of documents. The bigram distribution model is a variation of the above K mixture model, where we estimate the probability that a word pair (wi-1, wi) occurs in a document by:

    P_{gen}(w_{i-1}, w_i) = 1 - \sum_{k=0}^{K-1} P_i(k)        (13)

where K is dependent on the size of the subunit: the larger the subunit, the larger the value (in our experiments, we set K from 1 to 3), and Pi(k) is the probability that the word pair (wi-1, wi) occurs k times in a document. Pi(k) is estimated by equation (8), where α and β are estimated by equations (9) to (12). Accordingly, cf is the total number of occurrences of a word pair (wi-1, wi) in the collection, df is the number of documents that contain (wi-1, wi), and N is the total number of documents.

3.1.4 Comparison

Our experiments show that the K mixture is the best among the three in most cases. Some partial experimental results are shown in Table 1. Therefore, in section 4, all experimental results are based on the K mixture method.

    Size (Number of Bigrams)    Poisson    IDF       K Mixture
    2,000,000                   693.29     682.13    633.23
    5,000,000                   631.64     628.84    603.70
    10,000,000                  598.42     598.45    589.34

Table 1: Word perplexity comparison of different bigram distribution models.

3.2 Algorithm

The bigram distribution model suggests a simple thresholding algorithm for bigram backoff model pruning:

1. Select a threshold θ.
2.
Compute the probability that each bigram occurs in a document individually by equation (13).
3. Remove all bigrams whose probability to occur in a document is less than θ, and recompute the backoff weights.

4 Experiments

In this section, we report the experimental results on bigram pruning based on distribution versus the count cutoff pruning method. In conventional approaches, a document is defined as the subunit of training data for term distribution estimating. But for a very large training corpus that consists of millions of documents, the estimation of the bigram distribution is very time-consuming. To cope with this problem, we use a cluster of documents as the subunit. As the number of clusters can be controlled, we can define an efficient computation method, and optimise the clustering algorithm. In what follows, we will report the experimental results with document and cluster being defined as the subunit, respectively. In our experiments, documents are clustered in three ways: by similar domain, style, or time. In all experiments described below, we use an open testing data set consisting of 15 million characters that have been proofread and balanced among domain, style and time. Training data are obtained from newspaper (People’s Daily) and novels.

4.1 Using Documents as Subunits

Figure 1 shows the results when we define a document as the subunit. We used approximately 450 million characters of People’s Daily training data (1996), which consists of 39708 documents.

Figure 1: Word perplexity comparison of cutoff pruning and distribution based bigram pruning using a document as the subunit.

4.2 Using Clusters by Domain as Subunits

Figure 2 shows the results when we define a domain cluster as the subunit. We also used approximately 450 million characters of People’s Daily training data (1996).
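The pruning criterion evaluated in these experiments, the K mixture estimate of equations (8)-(13) followed by the thresholding step of section 3.2, can be sketched as follows (our illustration; all counts, names, and the threshold are hypothetical):

```python
def k_mixture_pgen(cf, df, n_docs, K=1):
    """Equations (8)-(13): probability that a pair occurs at least K times
    in a document under the K mixture (assumes cf > df, so beta > 0)."""
    lam = cf / n_docs                    # (9) observed mean per document
    beta = (cf - df) / df                # (11), equal to lam * 2**IDF - 1
    alpha = lam / beta                   # (12)

    def p(k):                            # (8)
        delta = 1.0 if k == 0 else 0.0
        return (1 - alpha) * delta + (alpha / (beta + 1)) * (beta / (beta + 1)) ** k

    return 1.0 - sum(p(k) for k in range(K))   # (13): 1 - P(count < K)

def prune(stats, theta, K=1):
    """Section 3.2: keep bigrams whose generality probability reaches theta;
    backoff weights must then be recomputed (not shown here)."""
    return {bg for bg, (cf, df) in stats.items()
            if k_mixture_pgen(cf, df, N_DOCS, K) >= theta}

# Hypothetical (cf, df) counts over N_DOCS documents: a pair spread across
# many documents survives; a burst concentrated in two documents is pruned.
N_DOCS = 1000
stats = {("new", "document"): (50, 30), ("scarlett", "o'hara"): (40, 2)}
print(prune(stats, theta=0.01))
```

For K = 1 the estimate reduces to df / N, the fraction of documents containing the pair, which is why frequent-but-concentrated pairs such as the "Scarlett O'Hara" example in section 3 score low under this criterion despite their high raw counts.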
To cluster the documents, we used an SVM classifier developed by Platt (Platt, 1998) to cluster documents of similar domains together automatically, and obtained a domain hierarchy incrementally. We also added a constraint to balance the size of each cluster, and finally we obtained 105 clusters. It turns out that using domain clusters as subunits performs almost as well as the case of documents as subunits. Furthermore, we found that by using the pruning criterion based on bigram distribution, a lot of domain-specific bigrams are pruned. It then results in a relatively domain-independent language model. Therefore, we call this pruning method domain subtraction based pruning.

Figure 2: Word perplexity comparison of cutoff pruning and distribution based bigram pruning using a domain cluster as the subunit.

4.3 Using Clusters by Style as Subunits

Figure 3 shows the results when we define a style cluster as the subunit. For this experiment, we used 220 novels written by different writers, each approximately 500 kilobytes in size, and defined each novel as a style cluster. Just like in domain clustering, we found that by using the pruning criterion based on bigram distribution, a lot of style-specific bigrams are pruned. It then results in a relatively style-independent language model. Therefore, we call this pruning method style subtraction based pruning.

Figure 3: Word perplexity comparison of cutoff pruning and distribution based bigram pruning using a style cluster as the subunit.

4.4 Using Clusters by Time as Subunits

In practice, it is relatively easier to collect large training text from newspaper.
For example, many Chinese SLMs are trained from newspaper text, which is of high quality and consistent in style. But the disadvantage is the temporal term phenomenon. In other words, some bigrams are used frequently during one time period, and then never used again. Figure 4 shows the results when we define a temporal cluster as the subunit. In this experiment, we used approximately 9,200 million characters of People’s Daily training data (1978--1997). We simply clustered the documents published in the same month of the same year as a cluster. Therefore, we obtained 240 clusters in total. Similarly, we found that by using the pruning criterion based on bigram distribution, a lot of time-specific bigrams are pruned. It then results in a relatively time-independent language model. Therefore, we call this pruning method temporal subtraction based pruning.

Figure 4: Word perplexity comparison of cutoff pruning and distribution based bigram pruning using a temporal cluster as the subunit.

4.5 Summary

In our research lab, we are particularly interested in the problem of pinyin to Chinese character conversion, which has a memory limitation of 2MB for programs. At 2MB memory, our method leads to 7-9% word perplexity reduction, as displayed in Table 2.

    Subunit                        Word Perplexity Reduction
    Document                       9.3%
    Document Cluster by Domain     7.8%
    Document Cluster by Style      7.1%
    Document Cluster by Time       7.3%

Table 2: Word perplexity reduction for bigrams of size 2M.

As shown in Figures 1-4, although the perplexity rises sharply as the size of the language model is decreased, the models created with the bigram distribution based pruning have consistently lower perplexity values than those created with the count cutoff method.
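The word perplexity values reported throughout section 4 follow equation (1). A minimal evaluation sketch (ours, with a toy model rather than a trained one):

```python
import math

def perplexity(words, prob):
    """Equation (1): PP = 2 ** (-(1/N) * sum of log2 P(w_i | w_{i-1}))."""
    log_sum = sum(math.log2(prob(prev, w)) for prev, w in zip(words, words[1:]))
    n = len(words) - 1            # number of predicted words
    return 2 ** (-log_sum / n)

# Sanity check: a uniform model over a 4-word vocabulary gives PP = 4.
print(perplexity(["a", "b", "c", "d", "a"], lambda prev, w: 0.25))  # 4.0
```

In a real comparison, `prob` would be the backoff lookup of equation (2) built from the pruned model, and `words` the held-out test text.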
Furthermore, when modelling bigram distribution on document clusters, our pruning method results in a more general n-gram backoff model, which resists the domain, style or temporal bias of training data.

5 Conclusions

In this paper, we proposed a novel approach for n-gram backoff model pruning: keep n-grams that are more likely to occur in a new document. We then developed a criterion for pruning parameters from n-gram models, based on the n-gram distribution, i.e. the probability that an n-gram occurs in a document. All n-grams with probability less than a threshold are removed. Experimental results show that the distribution-based pruning method performed 7-9% (word perplexity reduction) better than conventional cutoff methods. Furthermore, when modelling n-gram distribution on document clusters created according to domain, style, or time, the pruning method results in a more general n-gram backoff model, in spite of the domain, style or temporal bias of training data.

Acknowledgements

We would like to thank Mingjing Li, Zheng Chen, Ming Zhou, Chang-Ning Huang and other colleagues from Microsoft Research, Jian-Yun Nie from the University of Montreal, Canada, Charles Ling from the University of Western Ontario, Canada, and Lee-Feng Chien from Academia Sinica, Taiwan, for their help in developing the ideas and implementation in this paper. We would also like to thank Jian Zhu for her help in our experiments.

References

F. Jelinek, “Self-organized language modeling for speech recognition”, in Readings in Speech Recognition, A. Waibel and K.F. Lee, eds., Morgan-Kaufmann, San Mateo, CA, 1990, pp. 450-506.

D. Miller, T. Leek, R. M. Schwartz, “A hidden Markov model information retrieval system”, in Proc. 22nd International Conference on Research and Development in Information Retrieval, Berkeley, CA, 1999, pp. 214-221.

V.W. Zue, “Navigating the information superhighway using spoken language interfaces”, IEEE Expert.

S. M. Katz, “Estimation of probabilities from sparse data for the language model component of a speech recognizer”, IEEE Transactions on Acoustics, Speech and Signal Processing, ASSP-35(3): 400-401, March, 1987.

K. Seymore, R. Rosenfeld, “Scalable backoff language models”, in Proc. International Conference on Speech and Language Processing, Vol. 1, Philadelphia, PA, 1996, pp. 232-235.

A. Stolcke, “Entropy-based Pruning of Backoff Language Models”, in Proc. DARPA Broadcast News Transcription and Understanding Workshop, Lansdowne, VA, 1998, pp. 270-274.

A. M. Mood, F. A. Graybill, and D. C. Boes, “Introduction to the Theory of Statistics”, New York: McGraw-Hill, 3rd edition, 1974.

G. Salton, and J. M. Michael, “Introduction to Modern Information Retrieval”, New York: McGraw-Hill, 1983.

S. M. Katz, “Distribution of content words and phrases in text and language modeling”, Natural Language Engineering, 1996(2): 15-59.

C. D. Manning, and H. Schütze, “Foundations of Statistical Natural Language Processing”, The MIT Press, 1999.

S. Harter, “A probabilistic approach to automatic keyword indexing: Part II. An algorithm for probabilistic indexing”, Journal of the American Society for Information Science, 1975(26): 280-289.

J. Platt, “How to Implement SVMs”, IEEE Intelligent Systems Magazine, Trends and Controversies, Marti Hearst, ed., vol. 13, no. 4, 1998.
Computational Linguistics Research on Philippine Languages

Rachel Edita O. ROXAS
Software Technology Department
De La Salle University
2401 Taft Avenue, Manila, Philippines
[email protected]

Allan BORRA
Software Technology Department
De La Salle University
2401 Taft Avenue, Manila, Philippines
[email protected]

Abstract

This paper describes computational linguistics activities on Philippine languages. The Philippines is an archipelago with vast numbers of islands and numerous languages. The tasks of understanding, representing and implementing these languages require enormous work. An extensive amount of work has been done on understanding at least some of the major Philippine languages, but little has been done on the computational aspect. The majority of the latter has been for the purpose of machine translation.

1 Philippine Languages

Within the 7,200 islands of the Philippine archipelago, there are about one hundred and one (101) languages that are spoken. This is according to the nationwide 1995 census conducted by the National Statistics Office of the Philippine Government (NSO, 1997). The languages that are spoken by at least one percent of the total household population include Tagalog, Cebuano, Ilocano, Hiligaynon, Bikol, Waray, Pampanggo or Kapangpangan, Boholano, Pangasinan or Panggalatok, Maranao, Maguindanao, and Tausug. Aside from these major languages, there are other Philippine dialects, which are variants of these major languages. Fortunato (1993) classified these dialects into the top nine major languages as above (except for Boholano, which is similar to Cebuano).

2 Language Representations

Linguistic information on Philippine languages is extensive for the languages mentioned above, except for Maranao, Maguindanao, and Tausug, which are some of the languages spoken in Southern Philippines. But as of yet, extensive research has only been done on theoretical linguistics, and little is known for computational linguistics.
In fact, computational linguistics research on Philippine languages is mainly focused on Tagalog. (Tagalog, or Pilipino, has the most speakers in the country; this may be due to the fact that it was officially declared the national language of the Philippines in 1946.) There is also notable work done on Ilocano. Kroeger (1993) showed the importance of grammatical relations in Tagalog, such as subject and object relations, and the insufficiency of a surface phrase structure paradigm to represent these relations. This issue was further discussed at LFG98, which took up the problem of voice and grammatical functions in Western Austronesian languages. Musgrave (1998) introduced the problem that certain verbs in these languages can head more than one transitive clause type. Foley (1998) and Kroeger (1998), in particular, discussed long-debated issues such as nouns in Tagalog that can be verbed, the voice system of Tagalog, and Tagalog as a symmetrical voice system. Latrouite (2000) argued that a level of semantic representation is still necessary to explicitly capture a word's meaning. Crawford (1999) contributed to an issue on interrogative sentences and suggested that the restriction on wh-movement reveals the syntactic structure of Tagalog. Potet (1995) and Trost (2000) provided general materials on computational morphology, though both presented examples from Tagalog. Rubino (1997, 1996) provided an in-depth analysis of Ilocano. The major contributions of this work include an extensive treatment of the complex morphology of the language, a thorough treatment of the discourse particles, and a reference grammar of the language. 3 Applications in Machine Translation Currently, most of the empirical endeavours in computational linguistics are in machine translation. 3.1 Filipino MT Software There are several commercially available translation software packages that include Philippine languages, but translation is done word-for-word.
One such software package is the Universal Translator 2000, which includes Tagalog among 40 other languages. Although omni-directional, translation involving Tagalog excludes morphological and syntactic aspects of the language. Another software package is the Filipino Language Software, which includes the Tagalog, Visayan, Cebuano, and Ilocano languages. 3.2 Machine Translation Research IsaWika! is an English-to-Filipino machine translator that uses the augmented transition network as its computational architecture (Roxas, 1999). It translates simple and compound declarative statements as well as imperative English statements. To date, it is the most serious research undertaking in machine translation in the Philippines. Borra (1999) presented another translation system that translates simple declarative and imperative statements from English to Filipino. The computational architecture of the system is based on LFG, which differs from IsaWika's ATN implementation. Part of the research was describing a possible set of semantic information for every grammar category to establish a semantically close translation. 4 Conclusion There are various theoretical linguistic studies on Philippine languages, but computational linguistics research is currently limited. CL activities in the Philippines have yet to gain acceptance from its computing science community. References Borra, A. (1999) A Transfer-Based Engine for an English to Filipino Machine Translation Software. MS Thesis. Institute of Computer Science, University of the Philippines Los Baños. Philippines. Crawford, C. (1999) A Condition on Wh-Extraction and What it Reveals about the Syntactic Structure of Tagalog. http://www.people.cornell.edu/pages/cjc26/l304final.html Foley, B. (1998) Symmetric Voice Systems and Precategoriality in Philippine Languages. In LFG98 Conference, Workshop on Voice and Grammatical Functions in Austronesian Languages. Fortunato, T. (1993) Mga Pangunahing Etnolingguistikong Grupo sa Pilipinas.
Kroeger, P. (1998) Nouns and Verbs in Tagalog: A Response to Foley. In LFG98 Conference. _____ (1993) Phrase Structure and Grammatical Relations in Tagalog. CSLI Publications, Center for the Study of Language and Information, Stanford, California. Latrouite, A. (2000) Argument Marking in Tagalog. In Austronesian Formal Linguistics Association 7th Annual Meeting (AFLA7). Vrije Universiteit, Amsterdam, The Netherlands. Musgrave, S. (1998) The Problem of Voice and Grammatical Functions in Western Austronesian Languages. In LFG98 Conference. National Statistics Office (1997) “Report No. 2: Socio-Economic and Demographic Characteristics”, Sta Mesa, Manila. Potet, J. (1995) Tagalog Monosyllabic Roots. In Oceanic Linguistics, Vol. 34, no. 2, pp. 345-374. Roxas, R., Sanchez, W. & Buenaventura, M. (1999) Final Report of Machine Translation from English to Filipino: Second Phase. DOST/UPLB. Rubino, C. (1997) A Reference Grammar of Ilocano. UCSB Dissertation, UMI Microfilms. _____ (1996) Morphological Integrity in Ilocano. Studies in Language, vol. 20, no. 3, pp. 333-366. Trost, H. (2000) Computational Morphology. http://www.ai.univie.ac.at/~harald/handbook.html
Development of Computational Linguistics Research: a Challenge for Indonesia Bobby Nazief, Ph.D. Computer Science Center, University of Indonesia Jakarta, Indonesia [email protected] 1 Introduction The emergence of the Internet as a global information repository, where information of all kinds is stored, requires intelligent information processing tools (i.e., computer applications) to help the information seeker retrieve the stored information. To build these intelligent information processing tools, we need to build computer applications that understand human language, since most of that information is represented in human language. This is where computational linguistics becomes important, especially for a country like Indonesia, which hosts more than 200 million people. We need to develop a systematic understanding of Bahasa Indonesia (the Indonesian national language) to enable us to develop the computer applications that will help us manage information intelligently. However, until recently, there have been only a few research activities in computational linguistics conducted in Indonesia. The establishment of Computer Science departments in Indonesian universities, which did not start until the beginning of the 1980s (Bandung Institute of Technology was the first among public universities to establish a Computer Science department, in 1980), may be partially responsible for this. In addition, Indonesian linguists seem to be keen on working “manually” instead of using computers in conducting their linguistic research; as stated in Muhadjir (1995), only a few of them really make use of the technology. Most computer scientists, on the other hand, tend to use a practical approach rather than constructing a complete framework for understanding the language when building related applications such as a specific information retrieval system. In the following, I will describe past research activities in computational linguistics on Bahasa Indonesia.
This description is by no means exhaustive, since it is very difficult to find out about research activities in computational linguistics in Indonesia. 2 Past Research Activities 2.1 Corpus Analysis Corpus analysis is an important means of understanding the evolution of language usage by its speakers. In the case of Bahasa Indonesia, research activities on corpus analysis have been almost nonexistent. There was one work by R. R. Hardjadibrata (1969) from Monash University, who conducted a word frequency analysis of Indonesian newspapers. There was also similar work conducted by the MMTS project (described in a following section); however, the result of that group's corpus analysis was not made public. Given this condition, with a group of colleagues from both the Faculty of Computer Science and the Faculty of Letters, I conducted an Indonesian corpus analysis using newspapers as the text source. We collected 52 editions of Kompas, a national newspaper with a large number of readers, published in the year 1994. Each of the 52 editions corresponds to a particular week of the year and was taken randomly from the 7 daily editions of that given week. From this collection, we constructed a corpus consisting of 2.200.818 words that were formed by 74.559 unique words. Of these more than 2 million words, 1.826.740 words, formed by 27.738 unique words, matched entries in the KBBI (Kamus Besar Bahasa Indonesia, the standard word dictionary for Bahasa Indonesia, which contains a little more than 70.000 word entries), while the rest are either names or foreign words. A detailed analysis can be found in Muhadjir (1996). 2.2 Morphological Analysis Everyone who has used a word processor understands the importance of a spelling checker in helping him/her to produce an error-free document.
To develop a spelling checker, we need to understand the morphological structure of words, especially how derived words are constructed from their root words through the addition of affixes. We have conducted research to analyze the morphological structure of Indonesian words, and based on this analysis we have developed a stemming algorithm suitable for those words. Unlike English, where suffixes dominate the generation of derived words, Bahasa Indonesia depends on both prefixes and suffixes to derive new words. Therefore, to stem a derived Indonesian word in order to obtain its root word, we have to look at the presence of both prefix and suffix in that derived word (Nazief, 1996). In addition, as in English, multiple suffixes can be present on a given derived word. Based on this stemming algorithm, we have developed spelling checker and spelling-error corrector utilities as part of the Lotus Smartsuite package (an office automation package consisting of word processor, spreadsheet, presentation editor, and database applications, developed by Lotus Development Corporation). 2.3 The MMTS Project One notable research activity among the few computational linguistics research activities in Indonesia is the Multilingual Machine Translation System (MMTS) project conducted by the Agency for the Assessment and Application of Technology (BPPT) as part of a multi-national research project among China, Indonesia, Malaysia, and Thailand, led by Japan (see http://www.cicc.or.jp/homepage/english/about/act/mt/mt.htm, http://www.aia.bppt.go.id/mmts). Unfortunately, there are very few publications about this work that could have benefited the computational linguistics community in the country. One of the few publications that the MMTS project made available to the public is the Indonesian Word Electronic Dictionary (KEBI), which can be accessed on-line at http://nlp.aia.bppt.go.id/. The dictionary contains
22.500 root-word and 43.500 derived-word entries. 3 Understanding Indonesian Grammar Currently, I am concentrating my work on developing a syntax analyzer for sentences written in Bahasa Indonesia. The approach taken initially was to use a context-free grammar with restrictions, such as that used in linguistic string analysis (Sager, 1981). Using this approach, we have developed a grammar that understands declarative sentences (Shavitri, 1999). However, our experience shows that we need more detailed word categories than are currently available in the standard Indonesian word dictionary (KBBI) before the grammar can be used effectively. This finding shows us the importance of collaborating with the linguists who understand this field better. But before we do this, we need to educate our fellow linguists about the importance of computers in their field. 4 Acknowledgements I would like to thank Mirna, bu Multamia, pak Muhadjir, bu Kiswartini, and all of my students who have collaborated with me in these efforts to understand Bahasa Indonesia better. 5 References R. R. Hardjadibrata (1969) An Indonesian Newspaper Wordcount. Department of Indonesian and Malay, Faculty of Arts, Monash University, Clayton, Victoria. Muhadjir (1995) Menjaring Data dari Teks. Lembaran Sastra Universitas Indonesia, edisi khusus: Tautan Sastra & Komputer, Faculty of Letters, University of Indonesia, Depok, pp. 81--91. Muhadjir, et al. (1996) Frekuensi Kosakata Bahasa Indonesia. Faculty of Letters, University of Indonesia, Depok, 207 p. Bobby A. A. Nazief and Mirna Adriani (1996) Confix Stripping: Approach to Stemming Algorithm for Bahasa Indonesia. Internal publication. Faculty of Computer Science, University of Indonesia, Depok. Naomi Sager (1981) Natural Language Information Processing: A Computer Grammar of English and Its Applications. Addison-Wesley Publishing Company, Massachusetts.
Shelly Shavitri (1999) Analisa Struktur Kalimat Bahasa Indonesia dengan Menggunakan Pengurai Kalimat Berbasis Linguistic String Analysis. Bachelor’s Thesis. Faculty of Computer Science, University of Indonesia, Depok, 88 p.
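The confix-stripping idea described in Section 2.2, in which an Indonesian derived word may carry both a prefix and a suffix around its root, can be illustrated with a toy sketch. This is not the published Nazief-Adriani algorithm: the affix lists and the root-word dictionary below are deliberately tiny and hypothetical.

```python
# Toy confix-stripping sketch (NOT the published algorithm): strip
# candidate prefixes and suffixes and accept the first candidate
# that appears in the root-word dictionary.
PREFIXES = ["meng", "mem", "men", "me", "ber", "di", "ter", "pe"]
SUFFIXES = ["kan", "an", "i"]

def stem(word, roots):
    if word in roots:                      # already a root word
        return word
    candidates = [word]
    for p in PREFIXES:                     # try removing one prefix
        if word.startswith(p):
            candidates.append(word[len(p):])
    stripped = []
    for c in candidates:                   # then try removing one suffix
        stripped.append(c)
        for s in SUFFIXES:
            if c.endswith(s):
                stripped.append(c[:-len(s)])
    for c in stripped:                     # first dictionary hit wins
        if c in roots:
            return c
    return word                            # no root found: leave unchanged

roots = {"ajar", "baca", "makan"}          # tiny hypothetical root list
print(stem("membaca", roots))              # mem + baca -> "baca"
print(stem("ajari", roots))                # ajar + i   -> "ajar"
```

A real implementation would also handle multiple suffixes and the spelling changes that some prefixes trigger, which is where the published analysis goes well beyond this sketch.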
Good Spelling of Vietnamese Texts, one aspect of computational linguistics in Vietnam PHAN Huy Khanh Department of Information Technology DaNang University 17, Le Duan Street, DaNang City, Vietnam [email protected] Abstract There are many challenging problems in Vietnamese language processing, and it will be a long time before these challenges are met. Even some apparently simple problems, such as spelling correction, are quite difficult and have not yet been approached systematically. In this paper, we discuss one aspect of this type of work: designing a tool, called Vietools, to detect and correct spelling errors in Vietnamese texts by using a spelling database based on TELEX code. Vietools has also been extended to serve many purposes in Vietnamese language processing. Introduction For the past two decades, computational linguistics (CL) has progressed substantially in Vietnam, mainly in these basic areas: data acquisition from the keyboard, encoding, and restitution through an output device for Vietnamese diacritic characters; updates of the fonts in Microsoft DOS/Windows; standardization for Vietnamese (James Do, Ngo Thanh Nhan); automatic translation of English documents into Vietnamese and vice versa (Phan Thi Tuoi, Dinh Dien); recognition of handwriting (Hoang Kiem, Nguyen Van Khuong); speech processing (Nguyen Thanh Phuc, Quach Tuan Ngoc); building bilingual dictionaries such as English-Vietnamese and V-E, French-Vietnamese and V-F dictionaries (Lac Viet); archives of old Sino-Vietnamese documents (Ngo Trung Viet, Cong Tam), etc. Some of these works have been presented at informatics and IT workshops organized in Vietnam. These efforts are modest and do not yet show our full potential. There are many reasons for this weakness; the major one is that the different efforts are quite isolated and there is not enough coordination. Some coordinated workshops held from time to time would be very helpful. At the IT Dept.
of DaNang University, we are building a lexical database based on TELEX code to accomplish the following tasks:
- Converting Vietnamese texts from any font to any other font.
- Putting texts in alphabetical order independently of the font in use.
- Looking words up in monolingual and/or multilingual dictionaries.
- Building specialized monolingual dictionaries.
At present, we are taking part, with GETA, CLIPS, IMAG, France, in the FEV project for a multilingual French-Vietnamese-via-English dictionary. In fact, inputting Vietnamese texts still poses many problems that are not yet solved properly. The most common spelling errors to detect and correct are:
- wrong tone marks or misspellings,
- failure to follow spelling conventions, such as not using syllables systematically within the same text, etc.
Winword, a commercial text processor, is not able to detect and correct Vietnamese spelling mistakes. The program designed by Ngo Thanh Nhan (without an associated spelling dictionary) and other software packages for Vietnamese still do not offer adequate solutions. We propose here a general solution for building a tool, called Vietools, for detecting and correcting spelling errors. Vietools is designed for office applications such as Winword, Excel, Access, and PowerPoint in Microsoft Windows. Vietools has also been extended for converting and rearranging Vietnamese words in dictionaries and for consulting Vietnamese dictionaries, including multilingual dictionaries. 1 Building the spelling database In the spelling dictionary by Hoang Phe (1995), there are 6760 syllables in the writing system (6616 syllables in the phonological system) used to compose single or complex words. Each syllable has two parts: an initial consonant (optional) and a rhyme pattern (including rhyme and tone). Altogether, there are 27 initial consonants and 1160 rhyme patterns (including 6 tones). Based on Vietnamese syllable structure, the spelling database is built in tabular form.
Each element of the table helps to check the correctness of a syllable based on the column position of the initial consonant and the row position of the rhyme pattern. For example, the syllable lamf (work) in TELEX form is composed of the initial consonant l and the rhyme pattern am with the low falling tone (grave accent) f. Each element of the table can be classified as:
- a syllable used in Vietnamese;
- an alternative in tone sign position (on o: oja, or on a: oaj), a pronunciation or dialect variant in spelling (z is equivalent to d or gi, y is equivalent to i...), or a borrowing such as karaoke, photocopy, fax...;
- a Sino-Vietnamese word: coongj (addition) → congj, quoocs (country) → nuwowcs...;
- a combination unable to form a syllable: quts, quoon, coan, cuee...
Techniques have been developed to recognize compound words of two syllables, such as baor damr or damr baor (guarantee), chung chung (vague), etc.; of three syllables, such as howpj tacs xax (cooperative), etc.; and of four syllables, such as coong awn vieecj lamf (work, job), etc. 2 Designing Vietools The error-detecting program reads one syllable at a time from the text. The syllable is divided into an initial consonant and a rhyme pattern, with attention to special initial consonants: gi contains the vowel i; qu contains the vowel u but is easy to separate from the syllable, since there is no initial consonant q alone; the other compound initial consonants have length 2 or 3. The error-correcting unit checks the conformity of the initial consonant (if present) and the rhyme pattern. 3 Code converting At present, there are many Vietnamese fonts built on different codes (differing in the number of bytes used, 1 or 2, in the order of tones, in letter arrangements, etc.). Because there has not been a unified code for Vietnamese text, we selected TELEX as a pivot code. There are many codes to convert from, such as IBM-CP01129, Microsoft-CP1258, VISCII, VietKey, VietWare, VNI, TCVN3, Unicode, etc.
Vietools works on syllables converted to TELEX. Vietools analyzes syllables to detect the initial consonant and rhyme pattern in TELEX code. Conclusion The main advantage of our method is that the tool operates independently of the Vietnamese font used. The design of Vietools is open: one can add new functions such as text or data conversion. The spelling database structure helps in building multifunctional dictionaries, which are essential for natural language processing. Acknowledgements My thanks go to my students for the realization of Vietools and to my colleagues for their opinions. In particular, I thank Professor Aravind Joshi, University of Pennsylvania, Philadelphia, USA, for his helpful suggestions. I am grateful to Christian Boitet, Professor, Joseph Fourier University, GETA, CLIPS, IMAG, France, for his comments on this paper. References 1. Hoang Phe (1995) Dictionary of Orthography. Center of Lexicography, DaNang Publishing House, 509 p. 2. Hoang Phe (1997) Vietnamese Dictionary. Center of Lexicography, DaNang Publishing House, 1130 p.
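The syllable analysis described in Section 2 can be sketched as a small function. This is an assumption-laden illustration, not the Vietools implementation: the initial-consonant list below is partial (the paper's table has 27 initials), and the TELEX tone letters s, f, r, x, j are taken to mark the five non-level tones.

```python
# Sketch: decompose a TELEX-encoded syllable into
# (initial consonant, rhyme pattern, tone letter).
INITIALS = ["ngh", "nh", "ng", "kh", "th", "tr", "ph", "ch", "gh",
            "gi", "qu", "b", "c", "d", "g", "h", "k", "l", "m",
            "n", "p", "r", "s", "t", "v", "x"]   # partial list, longest first
TONES = set("sfrxj")   # TELEX tone letters (acute, grave, hook, tilde, dot)

def decompose(syllable):
    tone = ""
    if syllable and syllable[-1] in TONES:    # tone letter written last
        tone = syllable[-1]
        syllable = syllable[:-1]
    initial = ""
    for c in INITIALS:                        # longest match wins
        if syllable.startswith(c):
            initial = c
            break
    rhyme = syllable[len(initial):]
    return initial, rhyme, tone

print(decompose("lamf"))   # the paper's example: ('l', 'am', 'f')
```

Checking the resulting (initial, rhyme) pair against the table's column and row positions is then a simple lookup, which is the core of the error-detecting unit described above.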
COMPUTATIONAL LINGUISTICS IN INDIA: AN OVERVIEW Akshar Bharati, Vineet Chaitanya, Rajeev Sangal Language Technologies Research Centre Indian Institute of Information Technology, Hyderabad {sangal,vc}@iiit.net 1. Introduction Computational linguistics activities in India are being carried out at many institutions. The activities are centred around the development of machine translation systems and lexical resources. 2. Machine Translation Four major efforts on machine translation in India are presented below. The first one is from one Indian language to another; the next three are from English to Hindi. 2.1. Anusaaraka Systems among Indian languages In the anusaaraka systems, the load between the human reader and the machine is divided as follows: language-based analysis of the text is carried out by the machine, and knowledge-based analysis or interpretation is left to the reader. The machine uses a dictionary and grammar rules to produce the output. Most importantly, it does not use world knowledge to interpret (or disambiguate), as that is an error-prone task and involves guessing or inferring based on knowledge other than the text. Anusaaraka aims for perfect "information preservation". We relax the requirement that the output be grammatical. In fact, anusaaraka output follows the grammar of the source language (where the grammar rules differ and cannot be applied with 100 percent confidence). This requires that the reader undergo a short training to read and understand the output. Among Indian languages, which share vocabulary, grammar, pragmatics, etc., the task (and the training) is easier. For example, words in a language are ambiguous, but if the two languages are close, one is likely to find a one-to-one correspondence between words such that the meaning is carried across from the source language to the target language.
For example, for 80 percent of the Kannada words in the anusaaraka dictionary of 30,000 root words, there is a single equivalent Hindi word which covers the senses of the original Kannada word. Similarly, wherever the two languages differ in grammatical constructions, either an existing construction in the target language which expresses the same meaning is used, or a new construction is invented (or an old construction used with some special notation). For example, adjectival participial phrases in the south Indian languages are mapped to relative clauses in Hindi with the '*' notation (Bharati, 2000). Similarly, existing words in the target language may be given wider or narrower meaning (Narayana, 1994). Anusaarakas are available for use as email servers (anusaaraka, URL). 2.2. Mantra System The Mantra system translates government appointment letters from English to Hindi. It is based on synchronous Tree Adjoining Grammar and uses tree transfer for translating from English to Hindi. The system is tailored to deal with its narrow subject-domain. The grammar is specially designed to accept, analyze and generate sentential constructions in "officialese". Similarly, the lexicon is suitably restricted to deal with the meanings of English words as used in its subject-domain. The system is ready for use in its domain. 2.3. MaTra System The MaTra system is a tool for human-aided machine translation from English to Hindi for news stories. It has a text categorisation component at the front, which determines the type of news story (political, terrorism, economic, etc.) before operating on the given story. Depending on the type of news, it uses an appropriate dictionary. For example, the word 'party' is usually a 'political entity' and not a 'social event' in political news. The text categorisation component uses word-vectors and is easily trainable from a pre-categorized news corpus.
The parser tries to identify chunks (such as noun phrases and verb groups) but does not attempt to join them together. It requires considerable human assistance in analysing the input. Another novel component of the system is that, given a complex English sentence, it breaks it up into simpler sentences, which are then analysed and used to generate Hindi. The system is under development and expected to be ready for use soon (Rao, 1998). 2.4. Anusaaraka System from English to Hindi The English to Hindi anusaaraka system follows the basic principles of information preservation. It uses the XTAG-based supertagger and light dependency analyzer developed at the University of Pennsylvania (Joshi, 1994) for performing the analysis of the given English text. It distributes the load on man and machine in novel ways. The system produces several outputs corresponding to a given input. The simplest possible (and the most robust) output is based on the machine taking the load of the lexicon and leaving the load of syntax on man. Output based on the most detailed analysis of the English input text uses a full parser and a bilingual dictionary. The parsing system is based on XTAG (consisting of supertagger and parser), which we have modified for the task at hand. A user may read the output produced after the full analysis, but when he finds that the system has "obviously" gone wrong or failed to produce output, he can always switch to a simpler output. 3. Corpora and Lexical Resources 3.1 Corpora for Indian Languages Text corpora for 12 Indian languages have been prepared with funding from the Ministry of Information Technology, Govt. of India. Each corpus is of about 3 million words, consisting of randomly chosen text pieces published from 1970 to 1980. The texts are categorized into: literature (novel, short story), science, social science, mass media, etc. The corpus can be used remotely over the net or obtained on CDs (Corpora, URL).
3.2 Lexical Resources A number of bilingual dictionaries among Indian languages have been developed for the purpose of machine translation, and are available "freely" under the GPL. Collaborative creation of a very large English to Hindi lexical resource is underway. As a first step, a dictionary with 25,000 entries, with example sentences illustrating each different sense of a word, has been released on the web (Dictionary, URL). Currently work is going on to refine it and to add contextual information for use in the anusaaraka system, by involving volunteers. 4. Linguistic Tools and Others 4.1. Morphological Analyzers Morphological analyzers for 6 Indian languages, developed as part of the Anusaaraka systems, are available for download and use (Anusaaraka, URL). Sanskrit morphological analyzers with reasonable coverage have been developed by Ramanujan and Melkote based on the Paninian theory. 4.2 Parsers Besides the parsers mentioned above, a parsing formalism called UCSG identifies clause boundaries without using sub-categorization information. 4.3 Others Some work has also started on building search engines. However, terminological databases and thesauri are missing. Spelling checkers are available for many languages. There is substantial work based on alternative theoretical models of language analysis; most of it is based on the Paninian model (Bharati, 1995). 5. Conclusions In conclusion, there is a large computational linguistics activity in Indian languages, mainly centred around machine translation and lexical resources. Most recently, a number of new projects have been started for Indian languages with Govt. funding, and are getting off the ground.
References: Anusaaraka URL: http://www.iiit.net, http://www.tdil.gov.in Bharati, Akshar, Vineet Chaitanya and Rajeev Sangal, Natural Language Processing: A Paninian Perspective, Prentice-Hall of India, New Delhi, 1995. Bharati, Akshar, et al., Anusaaraka: Overcoming the Language Barrier in India, to appear in "Anuvad". (Available from anusaaraka URL.) CDAC URL: http://www.cdac.org.in Corpora URL: http://www.iiit.net Dictionary URL: http://www.iiit.net Narayana, V. N., Anusarak: A Device to Overcome the Language Barrier, PhD thesis, Dept. of CSE, IIT Kanpur, January 1994. Rao, Durgesh, Pushpak Bhattacharya and Radhika Mamidi, "Natural Language Generation for English to Hindi Human-Aided Machine Translation", pp. 179-189, in KBCS-98, NCST, Mumbai. Joshi, A.K., Tree Adjoining Grammar, in D. Dowty et al. (eds.) Natural Language Parsing, Cambridge University Press, 1985. Joshi, A.K. and Srinivas, B., Disambiguation of Supertags: Almost Parsing, COLING, 1994.
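The word-level core of the information-preserving idea in Section 2.1 can be sketched in a few lines: gloss each source word with a target-language equivalent while keeping the source language's word order, and pass unknown words through flagged rather than guessed. The dictionary entries below are invented placeholders; real anusaarakas of course do far more (morphology, construction mapping, special notation).

```python
# Minimal sketch of word-by-word glossing with source word order
# preserved; unknown words are flagged, never guessed at.
def gloss(sentence, lexicon):
    out = []
    for word in sentence.split():
        out.append(lexicon.get(word, word + "?"))  # '?' marks unknowns
    return " ".join(out)

# Invented toy English-to-Hindi glosses, for illustration only.
lexicon = {"cat": "billi", "runs": "daudta"}
print(gloss("the cat runs", lexicon))   # -> "the? billi daudta"
```

The output deliberately follows the source language's grammar, which is exactly the trade-off the anusaaraka approach makes in exchange for robustness and information preservation.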
The State of the Art in Thai Language Processing Virach Sornlertlamvanich, Tanapong Potipiti, Chai Wutiwiwatchai and Pradit Mittrapiyanuruk National Electronics and Computer Technology Center (NECTEC), National Science and Technology Development Agency, Ministry of Science, Technology and Environment. 22nd Floor Gypsum Metropolitan Tower 539/2 Sriayudhya Rd. Rajthevi Bangkok 10400 Thailand. Email: {virach, tanapong, chai}@nectec.or.th, [email protected] Abstract This paper reviews the current state of technology and research progress in Thai language processing. It summarizes the characteristics of the Thai language and the approaches taken to overcome the difficulties in each processing task. 1 Some Problematic Issues in Thai Processing It is obvious that the most fundamental semantic unit in a language is the word. Words are explicitly identified in languages with word boundaries. In Thai, there is no word boundary: Thai words are implicitly recognized and, in many cases, depend on individual judgement. This causes a lot of difficulties in Thai language processing. To illustrate the problem, consider a classic English example, the segmentation of "GODISNOWHERE":
(1) God is now here. (meaning: God is here.)
(2) God is no where. (meaning: God doesn't exist.)
(3) God is nowhere. (meaning: God doesn't exist.)
With the different segmentations, (1) and (2) have absolutely opposite meanings, while (2) and (3) are ambiguous as to whether nowhere is one word or two. The difficulty becomes greatly aggravated when unknown words exist. As Thai is a tonal language, a phoneme with a different tone has a different meaning. Many unique approaches have been introduced both for tone generation in speech synthesis research and for tone recognition in speech recognition research. These difficulties propagate to many levels of language processing, such as lexical acquisition, information retrieval, machine translation, speech processing, etc.
Furthermore, similar problems occur at the sentence and paragraph levels. 2 Word and Sentence Segmentation The first and most obvious problem to attack is the problem of word identification and segmentation. For the most part, Thai language processing relies on manually created dictionaries, which have inconsistencies in defining word units and limitations in quantity. [1] proposed a word extraction algorithm employing C4.5 with string features such as entropy and mutual information, reporting 85% precision and 50% recall. For word segmentation, longest matching, maximal matching and probabilistic segmentation were applied in early research [2], [3]. However, these approaches have limitations in dealing with unknown words. More advanced word segmentation techniques capture many language features such as context words, parts of speech, collocations and semantics [4], [5]; these report about 95-99% accuracy. For sentence segmentation, the trigram model was adopted and yielded 85% accuracy [6]. 3 Machine Translation Currently, there is only one machine translation system available to the public, called ParSit (http://www.links.nectec.or.th/services/parsit); it is a service for English-to-Thai webpage translation. ParSit is a collaborative work of NECTEC, Thailand and NEC, Japan. The system is based on an interlingua approach to MT, and the translation accuracy is about 80%. Other approaches such as generate-and-repair [7] and sentence pattern mapping [8] have also been studied. 4 Language Resources The only Thai text corpus available for research use is the ORCHID corpus. ORCHID is a 9-MB Thai part-of-speech tagged corpus initiated by NECTEC, Thailand and the Communications Research Laboratory, Japan. ORCHID is available at http://www.links.nectec.or.th/orchid.
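The maximal matching strategy mentioned in Section 2, which prefers the dictionary segmentation covering the text with the fewest words, can be sketched with the paper's own English example. The tiny lexicon is hypothetical, and real Thai segmenters add the statistical and feature-based machinery cited above.

```python
# Dictionary-based maximal matching: dynamic programming for the
# segmentation that covers the text with the fewest words.
def maximal_matching(text, lexicon):
    n = len(text)
    best = [None] * (n + 1)   # best[i] = fewest words covering text[:i]
    best[0] = 0
    back = [0] * (n + 1)      # back[i] = start index of the last word
    for i in range(1, n + 1):
        for j in range(i):
            if best[j] is not None and text[j:i] in lexicon:
                if best[i] is None or best[j] + 1 < best[i]:
                    best[i] = best[j] + 1
                    back[i] = j
    if best[n] is None:
        return None           # cannot be segmented with this lexicon
    words, i = [], n
    while i > 0:              # recover the segmentation right to left
        words.append(text[back[i]:i])
        i = back[i]
    return words[::-1]

lexicon = {"god", "is", "no", "now", "here", "where", "nowhere"}
print(maximal_matching("godisnowhere", lexicon))  # ['god', 'is', 'nowhere']
```

Longest matching, by contrast, greedily takes the longest dictionary word at each position; both approaches break down on unknown words, which is exactly the limitation the paper notes.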
5 Research in Thai OCR

There are about 80 frequently used Thai characters, including consonants, vowels, tone marks, special marks, and numerals. Thai writing is arranged on four levels, without spaces between words, and the similarity among many character patterns makes the recognition task challenging. Moreover, the mixture of English and Thai in general Thai text creates many more patterns that an OCR system must recognize. For more than 10 years there has been considerable growth in Thai OCR research, especially on the "printed character" task. Early approaches focused on structural matching and later tended towards neural-network-based algorithms whose inputs capture special characteristics of Thai characters, e.g. curves, heads of characters, and placement. At least three commercial products have been launched, including "ArnThai" by NECTEC, which claims 95% recognition performance on clean input. Recent technical improvements to ArnThai are reported in [9]. Recently, the focus has shifted to developing systems that are more robust to unclean scanned input; this step requires more efficient features, fuzzy algorithms, and document analysis. At the same time, the task of "offline Thai handwritten character recognition" has been investigated, but only at the research stage of isolated characters. Almost all proposed engines are neural-network-based, with several styles of input features [10], [11]. There has been a small amount of research on "online handwritten character recognition"; one attempt, proposed by [12], is also neural-network-based, with chain-code input.

6 Thai Speech Technology

Regarding speech, Thai, like Chinese, is a tonal language, and tonal perception is important to the meaning of an utterance. Current research in speech technology can be divided into three major fields: (1) speech analysis, (2) speech recognition and (3) speech synthesis.
Most of the research in (1), done by linguists, concerns the basic study of Thai phonetics, e.g. [13]. In speech recognition, most current research [14] focuses on the recognition of isolated words. Developing continuous speech recognition requires a large-scale speech corpus, and practical research on continuous speech recognition is at its initial step, with at least one published paper [15]. In contrast to western speech recognition, topics specific to tonal languages, such as tone recognition, have been deeply researched, as seen in many papers, e.g. [16]. For text-to-speech synthesis, processing the idiosyncrasies of Thai text and handling the interplay of tones with intonation are the topics that make TTS algorithms for Thai different from those for other languages. The first successful system was accomplished by [14] and later by NECTEC [15]; both systems employ the same synthesis technique, based on the concatenation of demisyllable inventory units.

References

[1] V. Sornlertlamvanich, T. Potipiti and T. Charoenporn. Automatic Corpus-Based Thai Word Extraction with the C4.5 Learning Algorithm. In forthcoming Proceedings of COLING 2000.
[2] V. Sornlertlamvanich. Word Segmentation for Thai in Machine Translation System. Machine Translation, National Electronics and Computer Technology Center, Bangkok, pp. 50-56, 1993. (in Thai)
[3] A. Kawtrakul, S. Kumtanode, T. Jamjunya and A. Jewriyavech. Lexibase Model for Writing Production Assistant System. In Proceedings of the Symposium on Natural Language Processing in Thailand, 1995.
[4] S. Meknavin, P. Charoenpornsawat and B. Kijsirikul. Feature-Based Thai Word Segmentation. In Proceedings of the Natural Language Processing Pacific Rim Symposium, pp. 41-46, 1997.
[5] A. Kawtrakul, C. Thumkanon, P. Varasarai and M. Suktarachan. Automatic Thai Unknown Word Recognition. In Proceedings of the Natural Language Processing Pacific Rim Symposium, pp. 341-347, 1997.
[6] P. Mittrapiyanuruk and V. Sornlertlamvanich.
The Automatic Thai Sentence Extraction. In Proceedings of the Fourth Symposium on Natural Language Processing, pp. 23-28, May 2000.
[7] K. Naruedomkul and N. Cercone. Generate and Repair Machine Translation. In Proceedings of the Fourth Symposium on Natural Language Processing, pp. 63-79, May 2000.
[8] K. Chancharoen and B. Sirinaowakul. English-Thai Machine Translation Using Sentence Pattern Mapping. In Proceedings of the Fourth Symposium on Natural Language Processing, pp. 29-36, May 2000.
[9] C. Tanprasert and T. Koanantakool. Thai OCR: A Neural Network Application. In Proceedings of the IEEE Region Ten Conference, vol. 1, pp. 90-95, November 1996.
[10] I. Methasate, S. Jitapunkul, K. Kiratiratanaphung and W. Unsiam. Fuzzy Feature Extraction for Thai Handwritten Character Recognition. In Proceedings of the Fourth Symposium on Natural Language Processing, pp. 136-141, May 2000.
[11] P. Phokharatkul and C. Kimpan. Handwritten Thai Character Recognition Using Fourier Descriptors and Genetic Neural Networks. In Proceedings of the Fourth Symposium on Natural Language Processing, pp. 108-123, May 2000.
[12] S. Madarasmi and P. Lekhachaiworakul. Customizable Online Thai-English Handwriting Recognition. In Proceedings of the Fourth Symposium on Natural Language Processing, pp. 142-153, May 2000.
[13] J. T. Gandour, S. Potisuk and S. Dechongkit. Tonal Coarticulation in Thai. Journal of Phonetics, vol. 22, pp. 477-492, 1994.
[14] S. Luksaneeyanawin, et al. A Thai Text-to-Speech System. In Proceedings of the Fourth NECTEC Conference, pp. 65-78, 1992. (in Thai)
[15] P. Mittrapiyanuruk, C. Hansakunbuntheung, V. Tesprasit and V. Sornlertlamvanich. Improving Naturalness of Thai Text-to-Speech Synthesis by Prosodic Rule. In forthcoming Proceedings of ICSLP 2000.
[16] S. Jitapunkul, S. Luksaneeyanawin, V. Ahkuputra and C. Wutiwiwatchai. Recent Advances of Thai Speech Recognition in Thailand. In Proceedings of the IEEE Asia-Pacific Conference on Circuits and Systems, pp. 173-176, 1998.
2000
78
COMPUTATIONAL LINGUISTICS IN MALAYSIA

Zaharin Yusoff
Computer-Aided Translation Unit
School of Computer Sciences
Universiti Sains Malaysia
Penang, Malaysia
zarin@cs.usm.my

1 Historical Perspective

Computational linguistics in Malaysia began in early [year] with the implementation of a morphological analyser for Malay for a Masters thesis at Universiti Sains Malaysia (USM) in Penang. The external examiner was Professor Bernard Vauquois from the Groupe d'Etudes pour la Traduction Automatique (GETA) in Grenoble, France. What started as an academic event led to a very long-term collaboration between GETA and USM, starting with the development of a prototype English-Malay machine translation (MT) system using GETA's ARIANE system, which further led to the development of various linguistic tools and language databanks as well as basic research on grammar formalisms. The collaboration continues to this day with the support of USM and the French government. The Computer-Aided Translation Unit (known as UTMK) was set up in USM in [year] and spearheads the collaboration from the Malaysian end. UTMK has slowed down considerably on MT but continues work on various other aspects of computational linguistics, its latest effort being a search engine based on natural language processing techniques.

Another group began in Universiti Teknologi Malaysia (UTM) in [year] with the arrival of the Japanese CICC project, spearheaded by Fujitsu from the Japanese end, bringing with them the Atlas-II system. A multilingual translation system project was set up involving several east Asian countries, with the Malaysian members being UTM and Dewan Bahasa dan Pustaka (DBP), the Malay Language Academy. The Malaysian component of the project was called Kanta, which lasted for about six years, culminating in the establishment of the National Institute of Translation Malaysia (known as ITNM). ITNM has since stopped work on MT, and so has UTM. UTM operated from Kuala Lumpur during the Kanta project, but the main campus has moved to Johor Bahru.

A small but stable group exists in Universiti Kebangsaan Malaysia in Bangi, within the Faculty of Information Technology. Their work centres on the development of language tools for Malay, such as natural language query, morphological analysers, etc. Work in other universities and research institutions is very limited and sporadic. There was an attempt at MT at Universiti Malaya in Kuala Lumpur back in the mid-eighties, and some work on computer-aided language learning was carried out for a while at Universiti Institute Technologi MARA in Shah Alam.

To date, except for a handful (≤ five) which toyed with the idea of commercialising small language tools (e.g. a Malay spellchecker), no private company has put in any serious effort or investment into computational linguistics. Government support, other than for the Kanta project, is indeed very low, and researchers have to compete with all other domains and disciplines for the limited R&D grants made available by the Ministry of Science, Technology and Environment (cf. US$[…] million for the entire period of the […]th Malaysia plan). The level of appreciation for the field has remained very low throughout its […]-year history in the country, especially given the rush for quick results and profitability by both the public and private sectors.

With the above, capacity within the country in terms of numbers and level of expertise in computational linguistics has never reached the required critical mass. If anything, interest in the field has been on a decline after its small peak in the late eighties and early nineties.

2 Computer-Aided Translation Unit

The Computer-Aided Translation Unit, or Unit Terjemahan Melalui Komputer (UTMK), is the longest-running research unit working in computational linguistics in the country. Maintaining a core team of […] researchers and about the same number of contract personnel, it has also always been the largest. Apart from GETA (now GETA-CLIPS), which is UTMK's closest and longest-running foreign collaborator, it has also worked with other computational linguistics research units/teams in UMIST, Bangkok, Prague, Sofia, Nijmegen, Kyoto, etc. UTMK's closest local collaborator is DBP, a partnership that dates back to [year].

As mentioned earlier, UTMK began with the development of a prototype English-Malay MT system (for chemistry text) using GETA's ARIANE system. The work was completed in [year], but efforts to expand the prototype into an industrial system were not successful, due to lack of financial support and too-high expectations (translation of textbooks in very quick time). In parallel, an MT system shell called JEMAH was developed in LISP on the Macintosh (ARIANE ran on the mainframe), and the English-Malay prototype application was completed in [year].

As an immediate product for translation, a machine-aided human translation (MAHT) system called SISKEP was developed on the IBM PC in [year], and a Macintosh version in [year], the design being inspired by Melby's Mercury system. The system gained some measure of popularity and was in fact put on the market for almost a decade. Apart from the screen design and other language-specific utilities, the basic components of SISKEP included a bilingual dictionary lookup with rootword extraction, a Malay spellchecker (the first in the country), and thesaurus lookup (which led to the publication of the first ever edition of a Malay thesaurus). These basic components were bundled in a desktop accessory called RakanBM that could work with any word processor on the Macintosh or on Windows. The considerable success of the project led to the conception of a user-driven MT system in [year] (together with UMIST), which is essentially SISKEP with JEMAH running in the background, where users may choose to toggle between MAHT facilities and full MT depending on their translation needs.

Realising the need to collect linguistic tools, and in particular data for Malay, in order to get any further with MT (or any computational linguistic application for that matter), UTMK embarked on building linguistic tools and language databanks. A Malay text analysis system called MATA (including concordance, frequency counts, statistical analyses, etc.) was developed in [year] on the mainframe, and was later ported to the PC and then to UNIX in [year] to be included in a corpus system for DBP. The system now holds Malay texts with a total of more than […] million words. A dictionary system was also developed for DBP in [year]. It is a generic system that can generate dictionary systems as applications, and […] DBP dictionaries have been implemented using it, including a first ever French-Malay dictionary, a collaborative work involving the French Embassy, DBP, GETA and UTMK. There was an attempt to build a general lexical database for Malay ("Treasure Box": anything one wants to know about Malay), but the project never really got off the ground due to lack of funds. These projects occupied the period up to the end of [year].

At the level of grammar, UTMK has always worked on grammar formalisms, namely the String-Tree Correspondence Grammar (STCG), based on the concept of Structured String-Tree Correspondence (SSTC). This also entailed work on graphic grammar editors, parsers, and in particular a bilingual SSTC corpus bank to fuel various experiments in example-based MT, as well as automatic generators of analysis and synthesis programs. One major setback to such efforts is the lack of formal linguistic studies for Malay, a situation that has also forced UTMK to work directly on Malay linguistics.

As of [year], circumstances forced UTMK to venture into more commercial undertakings. One such project is the development of an EDI engine (where parsing/generation of message types are needed). Others include a search engine (a commercial product) based on word ontology and distance, a multilingual chat system (real-time MT for a very restricted language), and a generic internet portal …the relation to computational linguistics is still unknown…
Tree-gram Parsing: Lexical Dependencies and Structural Relations

Khalil Sima'an
Computational Linguistics, University of Amsterdam

Abstract

This paper explores the kinds of probabilistic relations that are important in syntactic disambiguation. It proposes that two widely used kinds of relations, lexical dependencies and structural relations, have complementary disambiguation capabilities. It presents a new model based on structural relations, the Tree-gram model, which allows head-driven parsing, and reports experiments showing that structural relations should benefit from enrichment by lexical dependencies.

[The body of this paper — the introduction, the definition of the Tree-gram model and its T-gram extraction and derivation processes, practical issues, and experiments on the Wall Street Journal corpus (Marcus et al., 1993) — could not be recovered from the garbled PDF extraction.]
"§JUšš™Ÿ•C–z£ˆ¤í C “ˆ™ƒœž™n¥L öª;¤Â™n–™Ÿ£ˆ¤Â™Ÿ£  âLš™Ÿ•–.° š”?®z“­ öª;¤Â™n–™Ÿ£ˆ¤ˆ™Ÿ£­ `ë«§`“ˆ”?œ?¤ˆš™Ÿ£¡U¥ D  “ˆ˜v ƒ˜vš™7ÃlËzÇ ˜U©$z£ˆ®º¶ E QQQ5¶OûM° ’ôš™n™Jªù®Uš`˜U© ¤Â™nš”žÌv˜v ”žz£ˆ•0˜vš™ ©$Z¤ˆ”žØ‡™Ÿ¤˜U§n§JU𤇔?£Â®zœž¬,îÖ³¶“™Ÿ£Â™nÌU™nšt˜p’êªù®Uš˜U© ”?• ®U™Ÿ£Â™nš`˜v C™Ÿ¤ âL CU®U™n “ˆ™nšp³ ”ž “¡ “ˆ™•¯Â¦§n˜v ¥]š˜U©t™Ÿ• U¥ ”ž •S£ÂZ¤Â™Ÿ•`ë+¥]šz© •Cz©$™«£ÂZ¤Â™ D ”?£à˜–‡˜vš ”?˜UœÊª  Cš™n™U·M “™4§Jz©$–‡œž™Ÿ©$™Ÿ£  •& “‡˜v %”ž •š U 2¤Âz©t”?£‡˜v C™Ÿ• ˜vš™=š™Ÿ©$oÌU™Ÿ¤ ¥]šz©  “™²•¯Â¦§n˜v à¥]š`˜U©$™Ÿ•àU¥ D ° ü”ž®z¯Â𙫿•“o³ •$˜•©t˜Uœ?œ%™J›Z˜U©t–‡œž™"U¥ ˜¤Â™nš”žÌv˜Yª  ”žz£.° d€üéqö.ý,þ_ÿšéÇìë$ì!ë/ö/é£óþqìÛî ²“™Ÿ£1£ÂZ¤Â™ D “ˆ˜U•󙟩$–ˆ Á¬ •¯Â¦§n˜v t¥]š`˜U©$™Ÿ•n·2³%™ƒ˜U••¯ˆ©$™äK¼Áª Uš¤ˆ™nšÕû˜vš¢UH̍–ˆšM§J™Ÿ••™Ÿ•4”d£®U™Ÿ£Â™nš`˜v ”?£Â®"¦,U “ ¹ ˜U£ˆ¤ º ’êªù®Uš˜U©t•4˜všz¯ˆ£ˆ¤”ž •C· ’êªù®Uš˜U©0î(î ] ˜U£ˆ¤  ] ¤Â™Ÿ£ÂU C™¨š™Ÿ•C–.°Õ “™ œž™n¥L öª0˜U£ˆ¤ š”ž®z“  öª ©$z• ƒ§`“ˆ”?œ?¤Âš™Ÿ£¡U¥ó£ÂZ¤Â™ D °ÕÁô™n g  ] ˜U£‡¤ l 9ÛÚ A(t = t  Ë l MIAt l MBNOJBt G Ucq l MIAt l MINO@t qRS UcopS V É   l = monÇp @IA ÔBÕ l 9ÛÚ A(t l = t   Ö @IA l MBA(t l MINOJBt G Ucq l MIAt l MIN!@Lt qRS U°opS V ÉØ× l @BA(t mzyWp l J*K 9 t U l @B@t VfS Uco Í Î l 9ÛÚ At l = t l @IAt l J*K 9 t U l @L@Lt V[S Uco l MBA(t l MINOJLt G U°q l MBA(t l MBNO@Lt qRS U°opS V ü”ž®z¯Âš™$æMî Ý t  XpqÎUªl'opS; ]RWYacdS \eXpÆcÇK]RWY_ acśd|opSg]S`w\|aKV[SÉobU°iS opS V Ý G Xp]ÇU›opS; ]ÎqRhfi_ U°]L; e U°Å›SÉ_ ac\N] UcXp\|Xp\|ÆCUc\ @BA x Ä;Z]Sge]Ç|Sj^feq]‡eS G eXp]Xp\fÆfr]ÇfSÎqRh|i_ U`]L;Ze UcśSÎiS _ acśS qS śdf]k¨qRXb\f_ Sj]Ç|S @BA _ acśd|opS śS \N] G Ucq‡Æ?S \fSge U°]S VeSgqRh|o]Xp\|ƛXp\ l = t  xs9FÇ|SÚT]Ç|SyeqRh|i_ U`];Ze UcśS q‡U`eSÎS śdf]k¨Uc\VCU°eSÎ\fac]‡qRÇ|a G \ÇfSgeS?x Hî ] ¦™ì™K ¯‡˜Uœ  Chš™Ÿ•–.°  ] ˜U£ˆ¤Àî ] ”ž¥  “™󣇘U©$™$U¥ê “™ó’êªù®Uš˜U© •C¬Z•C C™Ÿ©Ð§Jz£  ˜U”?£ˆ•W “ˆ™ ³2Uš¤ œ¡´ç£´ƒâLU “ˆ™nš³ ”?•C™+ “ˆ™n¬«˜vš™W™Ÿ©$–ˆ Á¬Âë`° Áô™n  D ·4œ?˜v¦™Ÿœž™Ÿ¤  ·¶¦™ “™h£ÂZ¤Â™³ “™nš™ƒ “ˆ™ §n¯Âšš™Ÿ£­  š™n³ š`”ž C™Jª;•C C™n–´ ˜v¢U™Ÿ• –lœ?˜U§J™U·I¤ ¦™= “ˆ™ ¿0-vª;œ?˜v¦™ŸœÖU¥ “™=–‡˜vš™Ÿ£   U¥ D ·ì˜U£ˆ¤ â  “ˆ™ 
¿0-vª;œ?˜v¦™Ÿœ¨U¥— “ˆ™Æ“™Ÿ˜U¤Zª;§`“ˆ”?œ?¤ U¥ D ° Õ¯ˆš –ˆšU¦l˜v¦‡”?œ?”ž ”?™Ÿ•˜vš™Uî ¤ O â.¼K¥  §:”–ëG¤ O â.¼K¥  §`¤bë`· ¤B â.¼K¥  §:”–ëG¤L â.¼N¥  § â §c—ú ] §.ïL â.¼ëc§   ] ë`· ¤I %â.¼K¥  §:”–ëG¤I %â.¼K¥  § â §c—ú ] §.ïI %â.¼ëc§ Hî ] ë`°  Ú :  @ 9 : 9 @  <  9 Û @ 9 ŽvŽvÝ 9 Ž M™Ÿ§J ”žz£‡•4´$oªy$Zä /~-÷™Ÿ£ˆ£Í’ôš™n™n¦l˜U£Â¢Ðâ½û˜všCª §n¯ˆ•²™n ¡˜Uœ½°ž·7äŸåUå1zë âLš™Ÿœž™Ÿ˜U•™ $zë=˜vš™ ¯ˆ•C™Ÿ¤Î¥]Uš  Cš˜U”?£‡”?£Â®˜U£‡¤—•C™Ÿ§J ”žz£¿$1 ”?•ì“™Ÿœd¤Zªùz¯Â "¥]Ušì C™Ÿ•C öª ”?£Â® âL³%™= ¯ˆ£Â™7z£ •™Ÿ§J ”žz£ $|2 ë`° ’ “ˆ™í–‡˜vš•™nšCª z¯Â C–‡¯ˆ ê”d•%™nÌv˜Uœ?¯ˆ˜v C™Ÿ¤0¦ ¬à÷ö™nÌY˜Uœž¦lø"!v·Âz£« “™F þ ª Zã#”Á©t™Ÿ˜U•¯Âš™Ÿ•4â€ú2œd˜U§¢t™n 2˜Uœ½°ž·läŸåUåZäHë%§Jz©$–‡˜všª ”?£Â®«˜«–ˆšU–z•C™Ÿ¤ƒ–‡˜vš`•C™ï¤´³ ”ž “ƒ “™$§JUšš™Ÿ•–,z£‡¤Zª ”?£Â®¶ Cš™n™n¦‡˜U£Â¢b–‡˜vš•C™ÉA¨z£%Áô˜v¦™Ÿœž™Ÿ¤ þ ™Ÿ§n˜Uœ?œâÁ þ š { ½ % œ ®yþ ¼ û »Ž¼ þ þ:® » L »Ž¼ {JL ý L ½ ®y{fLYJ ý { M { ½ % œ ® þ ¼ û »Ž¼ {JL ý L ½ ®y{|LYJ ý { £ ë`·Áטv¦,™Ÿœ?™Ÿ¤&š™Jª §n”?•”?z£âÁ‡zš { ½ % œ ®yþ ¼ û »Ž¼ þ þ ® » L »Ž¼ {JL ý L ½ ®y{|LYJ ý {%M { ½ % œ ® þ ¼ û »Ž¼ {JL ý L ½ ®y{fLYJ ý { M ë`· ˜U£ˆ¤è2šz••”?£Â®ú2š˜U§¢U™n •Sâùèêú¿š'£ ¯‡©¦™nšêU¥§Jz£Âª •C ”ž ¯ˆ™Ÿ£­ •”?£¾¨ “‡˜v 2ÌZ”žzœ?˜v C™W§Jz£‡•C ”ž ¯Â™Ÿ£  ê¦,z¯‡£ˆ¤Zª ˜vš”ž™Ÿ• ”?£ì’¶ë`° $ ô%_ö$éÇ÷ ë'&Wó'ö$é,ê óšþqìÛî ’ “ˆ™ê£M¯ˆ©b¦™nšU¥,’êªù®Uš˜U©t• ”?•êœ?”?©ó”ž C™Ÿ¤ó¦M¬Ö•™n C ”?£Â®t§Jz£ˆ•C Cš`˜U”?£­ •êz£« “ˆ™Ÿ”žš¥]Uš`© ©S¯ˆ§“—œ?”?¢U™ êªù®Uš˜U©t•Ÿ°Âգ™¯Â–ˆ–™nš¦z¯ˆ£ˆ¤¨”d•"•™n  z£= “™¤Â™n–ˆ “)( âßZë`·¶˜à•C™Ÿ§Jz£ˆ¤=z£= “™£ ¯ˆ©b¦™nš U¥¶§`“ˆ”?œ?¤ˆš™Ÿ£hU¥ ™nÌU™nš¬£ÂZ¤Â™pâ*Jë`·%˜ “‡”žš¤z£ “ˆ™ + ÇN]R]d € ’/’ GwGwG x eS qRS U`e_yÇ¡x U°]R] x _ a?Å<’›Å›_ acobopXp\|q ’Nx ,:9nWYưe UcÅÀVfS d[]ÇXpq‡]ÇfSÎopS \|ư]Ça/;~]Ç|SÎopac\|Æ?Sgq]”d|U°]ÇXp\ ]Ç|S]ReS S(acif] UcXp\fS ViNkªeXpÆ?ÇN]+’`obS+; ]RWYopXb\fS U°eX.- U`]Xpa?\ïa/;Ž]Ç|SÓ9nW Æce U°Å3U°each|\V¨]Ç|S*9nWYÆce U°Å3\|aKVfSgq0/|ÇfS U?VKW_ Ç|XpobV[eSg\žx •¯‡©ÍU¥ê “™t£M¯ˆ©b¦™nš4U¥ê£Âz£  C™nš©t”?£‡˜Uœœ?™Ÿ˜v¥L•+³¶”ž “  “™%£ ¯ˆ©b¦™nšôU¥ô“ž™n¥L coš”ž®z“  `ëU–™Ÿ£Zª;£ˆM¤Â™Ÿ•%â.ê×ë`·v˜U£‡¤ ˜7¥]z¯ˆš “ â.ó+ëz£  
“ˆ™à£M¯ˆ©b¦,™nšƒU¥t³2Uš¤ˆ•”?£ ˜ ’êªù®Uš`˜U©ì°’4œ?•CÂ·˜ “š™Ÿ•“zœ?¤à”?••C™n $z£  “™«¥Lš™Jª M¯Â™Ÿ£ˆ§J¬ìâ1.ëU¥ “™ ’ªù®Uš˜U©ì° ± £ó “™ ™J›Z–™nš”?©$™Ÿ£  • ê è2·(ó  1p˜U£ˆ¤ 132 p˜vš™ØÂ›M™Ÿ¤í³ “ˆ”?œ?™¾ß §`“ˆ˜U£Â®U™Ÿ•n°54 ìý£ì!þ"6ì76Iþ öçù<éÇìWçcù.÷øþ,þó8)Dì9sî p™ ¤ˆ”d¤ì£ÂU  •©$MU “0 “ˆ™š™Ÿœ?˜v ”?ÌU™W¥]š™K ¯Â™Ÿ£‡§n”ž™Ÿ•n°(”?©ª ”?œd˜všS Cíâùèêzœ?œ?”?£ˆ•Ÿ· äŸåUåzéUë`·™nÌU™n𬠳%Uš¤ Z§n§n¯Âšš`”?£Â® œž™Ÿ•• “ˆ˜U£4 ”?©t™Ÿ•t”?£  “™0 Cš`˜U”?£ˆ”?£Â®vª;•™n $³2˜U•óš™Jª £ˆ˜U©t™Ÿ¤í C è( ª•P;:P " P ª üü2·%³ “ˆ™nš™ è( ”d•0ä0”ž¥4”ž •$؈š• öª;œž™n C C™nšÖ”?•ó§n˜v–‡”ž ˜Uœ?”žßn™Ÿ¤¨˜U£‡¤ bU “™nš³ ”d•C™U·Â˜U£ˆ¤Iª üü”?•ꔞ •%•¯=<$›° ª £Â¢Z£Âo³ £ ³2Uš¤ˆ•+”?£ “ˆ™t”?£Â–‡¯Â Õ˜vš™$š™Ÿ£‡˜U©$™Ÿ¤ “ˆ”?•ճꘟ¬¦™Jª ¥LUš™t–‡˜vš`•”?£Â®0•C ˜vš •n°7$ é">?ì92éÇìWç@?Ûéqö5ù'?ì9sî 4£ ”?£ˆ–‡¯Â Õ³%Uš`¤ ”?•+ ˜v®U®U™Ÿ¤à³ ”? “p˜Uœ?œ,  ªù ˜v®z•³¶”ž “ ³ “‡”?§“ƒ”ž Õ§J Z§n§n¯Âšš™Ÿ¤”?£ƒ “™S Cš˜U”?£‡”?£Â®ó Cš™n™n¦‡˜U£Â¢,° ’ “ˆ™%–‡˜vš`•C™nš&”?•˜4 ;³2vªù–‡˜U••2è :BA –‡˜vš•™nšŸîô “™ê؈š`•C  –‡˜U••왟©$–‡œžo¬M•’êªù®Uš˜U©t• “ˆ˜v 0¥€¯ˆœžØ‡œdœCß š ä ”?£ Uš¤ˆ™nš" C¨¢U™n™n–— “ˆ™ƒ–l˜vš•C™Jª;•C–‡˜U§J™h¯‡£ˆ¤Â™nš«§Jz£­ Cšzœ ¦™n¥LUš™b “ˆ™S•C™Ÿ§Jz£‡¤Zªù–‡˜U••4™Ÿ©$–‡œ?H¬Z•¶ “ˆ™S¥€¯ˆœ?œ×’ôš™n™Jª ®Uš˜U©Æ©$M¤ˆ™Ÿœ¥]Uš •™Ÿœž™Ÿ§J ”?£Â®t “ˆ™û Éb° C D :E29  9Á‘Z< @  9 ŽvÝ @ zŽ ü”žš• ƒ³2™¨š™nÌZ”ž™n³ÿ “™ œ?™J›Z”?§n˜Uœžª;§Jz£ˆ¤ˆ”ž ”žz£‡”?£Â®z•ƒ”?£ –ˆš™nÌM”žz¯‡•²³%Uš¢ÿâLU “™nš ”?©$–Uš ˜U£  ¡§Jz£ˆ¤ˆ”? ”žz£Zª ”?£ˆ®z•—˜vš™ £ÂU ¡¤ˆ”d•§n¯ˆ••C™Ÿ¤Ï¥LUš •C–l˜U§J™ š™Ÿ˜U•Cz£ˆ•eë`° û˜v®U™nš©t˜U£‡å â½û˜v®U™nš©t˜U£.·íäŸåUåMç¿-U™Ÿœ?”?£ˆ™n¢´™n  ˜Uœ½°ž·ÖäŸåUå|2 ë®UšH³ •˜=¤Â™Ÿ§n”?•”?z£Zªù Cš™n™  C=™Ÿ•C ”?©ó˜v C™ ¤óâA綾&ë  “šz¯Â®z“ÿ˜ “‡”?•C CUš¬ ªù¦‡˜U•C™Ÿ¤ÿ˜v–‡–ˆšz˜U§“ ³ “‡”?§“ЧJz£‡¤ˆ”ž ”žz£ˆ• z£ ˜U§J ¯‡˜UœÊªù³%Uš`¤ˆ•n° è꓇˜všCª £ˆ”d˜v¢ƒâùè꓈˜vš`£ˆ”?˜v¢,·äŸåUåzéUëꖈš™Ÿ•C™Ÿ£  • œž™J›Â”?§n˜Uœ?”žßŸ˜v ”?z£ˆ• U¥¸‡èêü×ýՕnî  “™ û”?£ˆ”d©t˜Uœ©$Z¤Â™Ÿœƒ§Jz£ˆ¤ˆ”ž ”?z£ˆ• ˆèü×ý š¯ˆœž™ ®U™Ÿ£ˆ™nš˜v ”žz£"z£" “™4“ˆ™Ÿ˜U¤Zªù³%Uš`¤óU¥ô”? 
• œž™n¥L öª;“ˆ˜U£ˆ¤ƒ•”?¤ˆ™U·,³¶“ˆ”?œž™S蓈˜vš£ˆ”?˜v¢Âåzét¥L¯Âš “™nš4§Jz£Âª ãF–KŠ[„t Ó¥HG IJG KML NOKMLPG  KMLPG û”d£ˆ”?©t˜Uœâùè꓈˜vš`£ˆ”?˜v¢,·äŸåUåzéUë Q1M°2 Q|2°?ä äv°2´ 1M°b$ évåM°p û˜v®U™nš`©t˜U£‡åâ½û˜v®U™nš©t˜U£.·×äŸåUåzë Q|2°‰æ Q|2°‰å äv°b$Uæ UæM°‰æ QZäv°2 è꓈˜v𣇔?˜v¢Zåzé"âùè꓇˜vš£ˆ”?˜v¢,·.äŸåUåzéUë Qzé °b Qzé °2 äv°p æ$M°?ä QUæM°?ä è2zœ?œ?”d£ˆ•åzéÖâùè2zœ?œ?”?£‡•n·,äŸåUåzéUë Q"QM°?ä Q"QM°‰æ Z°‰åZä æUæM°2 QUæM°‰å è꓈˜v𣇔?˜v¢ZåUå«âùè꓇˜vš£ˆ”?˜v¢,·.äŸåUåUåzë åZ°?ä åZ°?ä Z° éf2 é|Z°?ä QUåM°‰æ ˆèêü×ýµâùè꓈˜vš£ˆ”d˜v¢l·.äŸåUåzéUë éMäv° é éM°RQ $M°p´1 1UåM°b æ"QM°?ä $ ô%_ö/éÇ÷ âßø40âR$MO4ëCë ST9U V S"W>UYX X"U[Z]\ W"S>U \ ST9UYX ’טv¦‡œ?™tävî#&˜vš”žz¯ˆ• š™Ÿ•¯ˆœž • z£»/~-t•C™Ÿ§J ”žz£H$1t•C™Ÿ£  C™Ÿ£ˆ§J™Ÿ• 2´$³%Uš`¤ˆ•âR$$|2¡֕C™Ÿ£­ C™Ÿ£‡§J™Ÿ•`ë`° ¤ˆ”ž ”?z£ˆ•t “™ƒ®U™Ÿ£Â™nš˜v ”?z£7U¥+™nÌU™nš¬7§Jz£ˆ•C ”? ¯Â™Ÿ£­ N! • “™Ÿ˜U¤Zªù³2Uš¤ z£  “™=“™Ÿ˜U¤Zªù³2Uš¤ U¥ì”ž • –‡˜vš™Ÿ£  öª §Jz£ˆ•C ”? ¯Â™Ÿ£­ n·™+&™Ÿ§J ”žÌU™Ÿœž¬h¯ˆ•”?£Â®0¦‡”dœž™J›Z”d§n˜Uœ¤Â™n–™Ÿ£Zª ¤Â™Ÿ£ˆ§n”?™Ÿ•n°èêzœ?œ?”?£ˆ•`åzé+âùè2zœ?œ?”d£ˆ•n·ZäŸåUåzéU믈•C™Ÿ•˜4¦‡”?œž™J›Mª ”?§n˜Uœ?”?ßn™Ÿ¤L " ªùUš¤ˆ™nšû˜vš¢UHÌ$ý4š˜U©t©ó˜všŸîô˜4œž™J›Z”d§n˜UœÊª ”žßn™Ÿ¤hèêü×ý š¯ˆœž™”?•4®U™Ÿ£Â™nš˜v C™Ÿ¤p¦M¬0–ˆš # ™Ÿ§J ”?£ˆ®" “ˆ™ “™Ÿ˜U¤Zª;§`“ˆ”?œ?¤W؈š•C ×¥]zœ?œ?H³2™Ÿ¤W¦M¬+™nÌU™nš¬œž™n¥L ô˜U£ˆ¤bš”ž®z“   ¤Â™n–™Ÿ£ˆ¤Â™Ÿ£  n·ó§Jz£ˆ¤‡”ž ”žz£ˆ”?£ˆ®= “ˆ™Ÿ•C™7•C C™n–l•hz£  “ˆ™ “™Ÿ˜U¤Zªù³2Uš¤7U¥W “™§Jz£ˆ•C ”ž ¯ˆ™Ÿ£­ n°1è2zœ?œ?”?£‡•åz郙J›Mª  C™Ÿ£ˆ¤ˆ•2 “ˆ”d•2•§“™Ÿ©$™+ Ct¤ˆ™Ÿ˜Uœl³¶”ž “«•¯Â¦§n˜v %¥Lš˜U©$™Ÿ•Ÿ· ˜U¤ # ˜U§J™Ÿ£ˆ§J¬U·Õ Cš˜U§J™Ÿ•˜U£‡¤²³ “Zª;©$oÌU™Ÿ©$™Ÿ£­ n°Æè꓇˜všCª £ˆ”?˜v¢ÂåUå+§Jz£ˆ¤‡”ž ”žz£ˆ•œž™J›Â”?§n˜Uœ?œ?¬b˜U•2èêzœ?œ?”?£ˆ•¤ÂM™Ÿ•&¦l¯Â  ˜Uœ?•C+™J›Z–‡œžz”? 
•¯ˆ–$ C1 þ } ªùUš¤Â™nšû˜vš¢UHÌ–ˆšZ§J™Ÿ••C™Ÿ• ¥LUš ®U™Ÿ£Â™nš˜v ”?£ˆ® ¤Â™n–™Ÿ£ˆ¤Â™Ÿ£  •n° ã&›Z§J™n–‡  ¥]Uš ’ª ®Uš˜U©t•˜U£‡¤ï‡èêü×ýՕn·z˜UœdœM•¬M•C C™Ÿ©ó••©$ U “ “ˆ™2š™ŸœÊª ˜v ”žÌU™W¥Lš™KM¯Â™Ÿ£ˆ§n”ž™Ÿ• ³ ”ž “ì©S¯ˆ§“0§n˜vš™U° M™Ÿ£  C™Ÿ£ˆ§J™Ÿ•6Â2´ƒ³%Uš`¤ˆ•ì—?£ˆ§nœ?¯ˆ¤‡”?£Â®ì–‡¯ˆ£‡§J ¯ˆ˜Yª  ”žz£lë ”?£ƒ•C™Ÿ§J ”žz£$1Ö³2™nš™b–l˜vš•C™Ÿ¤ƒ¦ ¬ÌY˜vš”?z¯ˆ•¶’ª ®Uš˜U© •C¬M• C™Ÿ©t•n°²’טv¦‡œ?™ ä•“ˆH³ •ó “™0š™Ÿ•¯‡œž •$U¥ •Cz©$™$•C¬Z•C C™Ÿ©t•Õ”?£‡§nœ?¯ˆ¤ˆ”?£ˆ®$z¯Âš•n°M¬Z•C C™Ÿ©t•Õ§Jz£ˆ¤‡”ʪ  ”žz£ˆ”d£Â®b©$z• œž¬tz£«œž™J›Â”?§n˜Uœl”d£Â¥]Uš`©t˜v ”žz£ó˜vš™Õ§Jz£Âª  Cš˜U•C C™Ÿ¤7 C3ˆèêü×ýՕ"˜U£ˆ¤7’êªù®Uš˜U©t•n° +¯Âštš™Ÿ•¯‡œž  •“o³ •t “‡˜v Ö’êªù®Uš˜U©ó•Ö”?©t–ˆšHÌU™z£ûˆèêüýՕó¦l¯Â  ¥€˜Uœ?œ•“ˆUš U¥ “™֦™Ÿ•C bœž™J›Â”?§n˜UœÊª;¤ˆ™n–,™Ÿ£‡¤Â™Ÿ£ˆ§J¬•C¬M•Cª  C™Ÿ©t•n°ú2™Ÿ”?£Â®täNYªäK$]^1¦™n C C™nš& “ˆ˜U£µˆèêüýՕn·­§Jz©ª –‡˜vš˜v¦lœž™$³ ”ž “ “™"û”?£‡”?©t˜Uœ©$Z¤Â™Ÿœ&˜U£ˆ¤û˜v®U™nšCª ©t˜U£‡åh˜U£ˆ¤ ˜v¦z¯Â "é °p>^ ³%Uš•™0 “ˆ˜U£  “ˆ™0¦™Ÿ•C  •C¬Z•C C™Ÿ©ì·$”ž ”?•ƒ¥L˜U”?š C¡•˜H¬  “ˆ˜v í․ˆ™n–ˆ “ zë’ª ®Uš˜U©t•W–,™nš¥]Uš© ©tUš™tœ?”ž¢U™$¦‡”dœž™J›Z”d§n˜Uœ?”žßn™Ÿ¤ƒ¤Â™n–™Ÿ£Zª ¤Â™Ÿ£ˆ§J¬«•C¬Z•C C™Ÿ©t• “‡˜U£0¦‡˜vš™‡èêü×ýՕn° ’˜v¦‡œž™»$™J›Z“ˆ”?¦‡”ž •Wš™Ÿ•¯ˆœž •bU¥ Ìv˜vš”žz¯ˆ•S’êªù®Uš˜U© •C¬Z•C C™Ÿ©t•n° èêzœ?¯ˆ©t£ˆ• äeªy$µ™J›Â“ˆ”ž¦‡”? 
— “™  Cš˜U¤‡”ʪ  ”žz£ˆ˜Uœ ϝU¦‡•C™nšÌY˜v ”žz£ ˜v¦z¯Â  “™ ™+&,™Ÿ§J U¥  “™•”žßn™+U¥•¯Â¦ˆ Cš™n™Ÿ•cY’ªù®Uš˜U©t•z£–,™nš¥]Uš©ó˜U£ˆ§J™U° è2zœ?¯‡©t£ˆ•C1oªy֘vš™©$Uš™”?£  C™nš™Ÿ•C ”?£Â®Âî “ˆ™n¬0•“H³  “ˆ˜v  ™nÌU™Ÿ£ ³ “™Ÿ£ ’êªù®Uš˜U© •”žßn™²”?•¢U™n–ˆ  ØÂ›Z™Ÿ¤· •C¬Z•C C™Ÿ©t•ì “‡˜v 0˜vš™p–ˆš™Jª;“™Ÿ˜U¤²™Ÿ£ˆš”?§“ˆ™Ÿ¤=”d©$–ˆšoÌU™ z£ •C¬M• C™Ÿ©t•  “‡˜v ˜vš™²£ÂU  –‡š™Jª;“™Ÿ˜U¤ ™Ÿ£ˆš”?§“ˆ™Ÿ¤ â MO ë`°’ “ˆ”?•ê”?••¯Â–ˆ–Uš C™Ÿ¤"¦ ¬" “™+š™Ÿ•¯ˆœž êU¥§Jzœžª ¯ˆ©ó£ìä ”?£t§Jz£  Cš˜U•C % C%ˆèêüý—˜U£ˆ¤Öèêzœ?œ?”?£ˆ•`åzébâL ˜Yª ¦‡œ?™ äHë`îí “™`_ƒäp’êªù®Uš˜U© •¬M•C C™Ÿ© ¤ˆ”«&™nš•«¥]šz© è2zœdœ?”?£ˆ•åzé ˜Uœ?©$z•C pz£‡œž¬ ”?£'–ˆš™Jª;“ˆ™Ÿ˜U¤ ÌM•n°W“ˆ™Ÿ˜U¤ ™Ÿ£Âš`”?§“ˆ©t™Ÿ£­ ˜U£‡¤”?£‡¤Â™n™Ÿ¤h–™nš¥LUš©t•W©ó”?¤Â³ê˜Ÿ¬p¦™Jª  Á³%™n™Ÿ£˜ˆèêüý ˜U£ˆ¤´è2zœ?œd”?£ˆ•åzé ° ’ “ˆ”?•˜Uœ?œó•¯Â®vª ®U™Ÿ•C •󠓈˜v ó˜Uœ?œ?H³ ”?£ˆ®p¦‡”?œž™J›Â”?§n˜Uœê¤Â™n–™Ÿ£ˆ¤Â™Ÿ£ˆ§n”?™Ÿ•$”?£ ’êªù®Uš`˜U© ©$Z¤Â™Ÿœ?•W•“z¯ˆœ?¤p”?©t–ˆšHÌU™ó–,™nš¥]Uš©ó˜U£ˆ§J™U° ±  t”?•$£ÂU C™n³2Uš “ ¬à “‡˜v t–ˆš™Jª;“™Ÿ˜U¤ ™Ÿ£Âš”?§`“™Ÿ¤¨•C¬Z•öª  C™Ÿ©t•%˜vš™ ˜Uœ?•©$Uš™¶™a<ó§n”ž™Ÿ£  ”?£$ ”?©t™ ˜U£ˆ¤t•C–l˜U§J™U° è2zœd¯ˆ©t£æÖ•“o³ •4 “ˆ˜v 4˜U¤‡¤ˆ”?£Â®«û˜vš¢UoÌM”d˜U£ƒ§Jz£Âª ¤ˆ”? ”žz£ˆ”?£Â® Cí•¯Â¦§n˜v «¥Lš˜U©$™Ÿ•쥀¯Âš “™nš0”d©$–ˆšoÌU™Ÿ• –™nš¥LUš©t˜U£ˆ§J™•¯Â®U®U™Ÿ• ”?£Â®ì “ˆ˜v +¥€¯Âš “™nš+•C ¯‡¤Â¬U¥  “™t§Jz£‡¤ˆ”ž ”žz£ˆ˜Uœ–ˆšU¦‡˜v¦‡”?œ?”? 
”ž™Ÿ• U¥ê¤Â™n–™Ÿ£ˆ¤Â™Ÿ£  +’êª ®Uš˜U©ó•à”d•£Â™Ÿ§J™Ÿ••˜vš¬U° P H³ ¥]Uš¨˜U£ ¬ £ÂZ¤Â™=”?£ ˜®Uzœ?¤/p–ˆšU–,z•™Ÿ¤í–‡˜vš`•C™U· œž™n «£ÂM¤ˆ™Jª;“™Ÿ”ž®z“  ó¦™  “™b˜ŸÌU™nš˜v®U™–‡˜v “Zª;œ?™Ÿ£Â®U “ì CÖ˜ó³%Uš`¤¤Âz©t”d£ˆ˜v C™Ÿ¤ ¦M¬  “ˆ˜v Ö£ÂZ¤Â™U°ûp™•C™n "˜h “š™Ÿ•“zœ?¤¨z£7£ÂZ¤Â™Jª “™Ÿ”?®z“­ ô”d£W “™%®Uzœ?¤b˜U£‡¤W–ˆšU–z•C™Ÿ¤–‡˜vš•C™Ÿ•טU£ˆ¤bU¦Âª •C™nšÌU™+–,™nš¥]Uš©ó˜U£ˆ§J™U°ü”?®z¯Âš™éb–‡œžU •ê “™+üת;•§JUš™ š âR$b[Á‡cb[Á þ ë ZâÁTÁ þ ëb˜v®z˜U”?£ˆ•C S£ÂZ¤Â™Jª;“™Ÿ”ž®z“    “š™Ÿ•“zœ?¤°íèêœ?™Ÿ˜všœž¬U·–™nš¥LUš©t˜U£‡§J™0¤Â™n®Uš˜U¤Â™Ÿ•Ö˜U•  “™p£ÂZ¤Â™Ÿ•«®U™n 0¥€¯Âš “ˆ™nš"¥Lšz©  “™p³2Uš¤ˆ•«³¶“ˆ”?œž™ –ˆš™Jª;“™Ÿ˜U¤ˆ•”?©$–‡šHÌU™W–™nš¥LUš©t˜U£‡§J™U° d e Û @%‘ @ ÝŽ 9 Û @ Ž p™ • ˜vš C™Ÿ¤— “ˆ”?•«–l˜v–,™nš«³2z£ˆ¤Â™nš”d£Â®¨˜v¦,z¯ˆ ì “™ ©$™nš`”ž •$U¥W•C C𝇧J ¯Âš˜UœÊªùš™Ÿœd˜v ”žz£ˆ•n°ûp™–‡š™Ÿ•C™Ÿ£  C™Ÿ¤  “™ ’ªù®Uš˜U© ©$Z¤Â™ŸœZ˜U£ˆ¤$™J›Â“ˆ”ž¦‡”? C™Ÿ¤™Ÿ©$–l”žš”?§n˜UœM™nÌ ª ”?¤ˆ™Ÿ£ˆ§J™W¥LUš¶ “™S¯ˆ•C™n¥€¯ˆœ?£Â™Ÿ•• ˜U•4³%™Ÿœ?œ×˜U•  “ˆ™S•“ˆUš öª §Jz©t”d£Â®z•4U¥2•C Cš`¯ˆ§J ¯Âš˜Uœš™Ÿœ?˜v ”?z£ˆ•n°p™t˜Uœ?•"–‡švª ÌZ”?¤Â™Ÿ¤™nÌM”?¤ˆ™Ÿ£ˆ§J™¥LUšÕ “™b®z˜U”d£ˆ• ¥Lšz©Í™Ÿ£Âš”?§`“ˆ©$™Ÿ£   U¥•C C𝇧J ¯Âš˜Uœš™Ÿœd˜v ”žz£ˆ• ³¶”ž “0•C™Ÿ©t”žª;œž™J›Z”d§n˜Uœ.”?£Â¥LUšCª ©t˜v ”?z£° ± £z¯ÂšM¯Â™Ÿ•C  ¥LUšÕ¦™n C C™nš4©$Z¤Â™Ÿœ?”d£Â®Â·ˆ³2™ •C ”dœ?œ£Â™n™Ÿ¤  Cp™J›Z–‡œžUš™"“o³µ•C Cš¯ˆ§J ¯ˆš˜UœÊªùš™Ÿœ?˜v ”?z£ˆ• ˜U£ˆ¤ ¦‡”?œž™J›Â”?§n˜Uœ&¤ˆ™n–,™Ÿ£‡¤Â™Ÿ£ˆ§n”ž™Ÿ•b§n˜U£¦™ó§Jz©b¦‡”?£ˆ™Ÿ¤° šU¦‡˜v¦‡”?œd”ž ;¬'™Ÿ•C ”?©t˜v ”?z£·ì•©$MU “ˆ”?£Â®'˜U£ˆ¤ ™a<tª §n”ž™Ÿ£   ”?©$–lœž™Ÿ©$™Ÿ£  ˜v ”žz£ˆ•£Â™n™Ÿ¤0•–,™Ÿ§n”d˜Uœ˜v C C™Ÿ£­ ”žz£.° ãgfã­± ˜ _ E âR$ MO ë _cZUâR$ MO ë _h!vâR$ MO ë _h!vâöä MO ë _P!zâ MO ë _P!UâR$ MO ëij œ¡´ç£´ Á þ QZ°p´1 Q$M°2¡$ Q$M°bzé Q$M°RQ QZäv°b1 Q$M°‰å1 Á‡ QZ°‰åUå QM°b$1 QM°p´$ QM°pzæ Q|2°bUå QM°?äK1 èêú äv° é| äv°b1$ äv°22 äv°2¡1 äv°2>Q äv°b1 k •C™Ÿ£ˆ• $$|2¡ ؈š•C +äN $$|2¡ ’˜v¦‡œž™"$Mî þ ™Ÿ•¯ˆœž •U¥lÌY˜vš`”žz¯ˆ••C¬Z•C C™Ÿ©t•nîl_ ý âßø…ë`·_ MO âL–ˆš™Jª;“™Ÿ˜U¤$œ?™Ÿ£Â®U “$”?•I…ë`·m  œž´ç£´«âöäKJLUš¤ˆ™nš û˜vš¢UHÌ0§Jz£ˆ¤‡”ž ”žz£ˆ”?£ˆ®bz££ÂM¤ˆ™Ÿ•ê³ ”ž “«™Ÿ©$–ˆ Á¬ì•¯Â¦§n˜v 
¥]š`˜U©$™Ÿ•¥LUš ®U™Ÿ£Â™nš˜v ”?£ˆ®µîí˜U£ˆ¤1’êªù®Uš˜U©t•`ë`° 50 55 60 65 70 75 80 85 1 2 3 4 5 6 7 8 9 10 11 12 13 F-score n Height threshold D5 0-PH D5 1-PH D5 2-PH ü”ž®z¯Âš™bé î Þ ™Ÿ”ž®z“™nš¶£ÂM¤ˆ™Ÿ• ˜vš™“ˆ˜vš`¤Â™nš o ³p†‡‘mq8•.„t“T¯Ž„t¦„t†nŠf–  ±  “ˆ˜U£Â¢ þ ™Ÿ©$¢U Z§`“ˆ˜Z· þ ™Ÿ©$Ðú2z£ˆ£Â™Ÿ©ó˜Z·ûh˜Uœ? C™nšì՘v™Ÿœž™Ÿ©t˜U£ˆ• ˜U£ˆ¤µ-­˜v¢M¯ˆ¦hr˜ŸÌMš™ŸœÂ¥LUš”?œdœ?¯ˆ©t”?£‡˜v ”?£Â®¶¤‡”?•§n¯ˆ••”žz£ˆ•n· þ ™Ÿ©t¢Uú2z£ˆ£Â™Ÿ©t˜Õ¥LUš“ˆ”?•–‡š™Jª.˜U£ˆ¤$–z•C öªù–‡˜vš•”?£Â® •CU¥L ;³ê˜vš™U·ì˜U£ˆ¤  “™7˜U£Âz£ ¬Z©$z¯ˆ•š™nÌM”?™n³%™nš• ¥]Uš  “™Ÿ”žšÖ§Jz©t©$™Ÿ£  •n°=’ “ˆ”?•³2U𢍔?•$•¯Â–ˆ–Uš C™Ÿ¤ ¦M¬  “™ P ™n “™nšœd˜U£ˆ¤ˆ•À4š®z˜U£ˆ”žßŸ˜v ”žz£´¥LUšG§n”ž™Ÿ£­ ”?؇§ þ ™Ÿ•™Ÿ˜vš§“° s 9)t 9  9 @%‘ 9 Ž uvgwyx[z{0|P}'~z€x%vƒ‚€‚>€v…„‡†]ˆŠ‰‹{'}ƒŒ">ˆ}Ž‰ˆ;>z€‘~Š’R~0za“ ~Š’•”}'x[–—{'‰€˜B†>z€ˆ’[‘]™…~Šš]}œ›–‹‘~Šz{~’ {5{'‰a”€}ƒˆŠz€™€}‰Ž}'‘"“ ™x•’ ›šž™ˆŠz€˜˜z€ˆŠ›ƒv Ÿ‘¢¡y£¤¦¥0§0§Š¨©Yª«¬P¤ ­J®¯]§œ°±§ƒ²'£³"´ µ £¶B·€¸€¸·c¹º»¡=ºj¼½>§0§Š¥0¯ µ ª9¨¾ µ ®¿³"£ µaÀ]Á±µ ª«€³ µ «§ ÂJ¤a£Ãa¬¯]¤Š½9v Äcv)wy‰"Œ=v@ƒ‚€‚Å"vÆlª>£©Ç¥0¯‹©Yª« Á ©Yª‹«³"© ¬®¿©Ç¥¬BÈO©Y®¯h¼® µ ®¿© ¬´ ®¿©Ç¥¬'É¡§Ê£­Ê¤£Ë µ ªm¥0§ËB¤¦¨€§ À ¬¤­¾ µ ®%³"£ µaÀ9Á±µ ª‹«³ µ «§'v Ì͚>ÎÏ~š]}¦›’ ›'П ÑÒÑgÓ͓Ԍ]’[›Š›}ƒˆ~0za~Š’•‰‘Õ›}ƒˆ’[}ƒ›Öƒ‚€‚Åד0'ØÐ Ù‘]’•”}'ˆ0›’•~ – ‰€Ž„˜›~}ƒˆŠŒ]z€˜œv ÄcvOwy‰€‘]‘>}'˜z]ÐÍÌgvOwy]–‹’[‘]™>Ðiz€‘>ŒÖÄcviÚ"{0šz]v7ƒ‚‚€‚>vh„ ‘]}ƒÛ܆]ˆŠ‰€ÝzÝ]’[x•’•~ –˜B‰"Œ"}'xގ‰ˆ Œ]za~0z߉ˆ’[}'‘~Š}ƒŒ†>zˆ0› “ ’[‘]™>v±Ÿ‘;ÌOz€]xÎ}'|‹|}'ˆgz‘>ŒáàÛy}'‘âã}'ˆ0Œ"’[x•}¦›'Ð×}ƒŒ]’R~Љ€ˆ0›'Ð ¡y£Š¤ƒ¥0§0§0¨a©Yª«¬¤­®¯]§c·aä®Y¯BºË;¬®Ô§'£¨ µ ËæåO¤ ÀYÀ ¤¦çʳ"©Y³"ËÐ „˜B›~}ƒˆŠŒ>z˜œÐ>èlš]}êé}Ê~š>}'ˆŠx[z€‘>Œ]›ƒÐ>Œ"}ƒ{'}'˜;Ý}ƒˆƒv"Ÿ‘>›~’•“ ~Š"~}ގ‰ˆiÑ)‰™€’ {Ѐѱz‘]™>z™}lz€‘>ŒÓ뉀˜B†]"~0za~’[‰€‘ÒÐÎ}ʓ †>z€ˆ~Š˜B}'‘~މŽOÌiš]’[x•‰›‰†]š‹–€v uvÓëš>zˆŠ‘]’ z|9v쁃‚‚í‹vîÚ~Šz~’ › ~Š’[{ƒzx†>zˆ0›’•‘]™7Ûޒ•~šÕz {'‰€‘~}'ï~“%ŽˆŠ}'}ߙ€ˆ0z˜B˜zˆ5z€‘>ŒžÛë‰ˆŠŒ7›~Šz~’ › ~Š’[{ƒ›'v👠¡y£Š¤ƒ¥0§0§0¨a©Yª«¬;¤ ­ã®¯]§á'ØñÇò5¾ µ ®¿©Ç¤aª µÀ åi¤aª¦­Ê§Ê£§'ªm¥0§B¤aª º£®¿© óë¥'© µaÀ5ô ª>®§ ÀYÀ ©R«§'ªm¥0§ÊÐh†>z€™€}ƒ› Å€‚€õaö‹÷€øù]ÐJúJ}ƒ‘]x•‰ ÌOz€ˆ|9v"„„„ŸÞÌiˆŠ}ƒ›Š›0ûúJŸ è7ÌiˆŠ}ƒ›Š›ƒv 
uv9Óëš>z€ˆ‘]’ z|9vƒ‚‚€‚]vބj˜z[˜á>˜;“Ô}'‘~Šˆ‰†–“Ô’•‘›†]’[ˆŠ}ƒŒ †>z€ˆŠ›}'ˆ¦vŸ‘œ»§Ô½>¤a£®å¼m´ ¸€¸´·ä€Ð>Ì͈‰a”‹’ Œ"}'‘{Ê}€Ð‹Äޚ]‰"Œ"} Ÿ ›x[z€‘>Œ=v úvÓ뉀x[x[’•‘>›ƒvü¦‚€‚í"vÕèlš]ˆŠ}'}ߙ}'‘]}ƒˆŠz~’[”€}ÐÞx•}' {'z€x•’[ý'}¦Œ ˜B‰"Œ"}'x ›yŽ‰€ˆl›~Šza~Š’[›~’ {'z€x9†>zˆ0›’[‘]™v±Ÿ‘…¡y£¤¦¥0§Š§0¨©Yª«¬c¤­ ®¯]§þÿa®¯êºªª>³ µaÀ§0§Ê®¿©Yª«;¤ ­y®Y¯>§ëº;å Á—µ ª9¨ã®¯]§€®Y¯ åi¤aª¦­Ê§Ê£Š§Êª9¥Š§B¤­c®Y¯>§ÆÍº;å Á Ð]†>z€™€}ƒ›ã¦÷×öù]Ћú…zŒ"ˆ’ Œ=Рڋ†z’[‘)v vyu͒[›‘]}ƒˆƒv‡ƒ‚€‚÷]vðèlš>ˆ}ƒ}ߑ]}ƒÛ †]ˆ‰Ý>zÝ>’•x[’[›~’ {…˜B‰"Œ‹“ }ƒx[›HŽ‰€ˆÖŒ"}ƒ†}ƒ‘>Œ"}ƒ‘>{ʖ †>zˆ0›’•‘]™܄‘ }'ï"†]x•‰ˆŠz~’[‰€‘)v Ÿ‘ ¡l£¤ƒ¥0§0§0¨a©Yª‹«a¬@¤­Jå Á)ô ¾ l´ ¸ ×ÐO†z™€}¦›êùØø×öaØÅ]Ð Ó뉆}ƒ‘]š>z™}'‘)Ð]Î}'‘>˜Bz€ˆ|9v yv}'x[’•‘>}'|9Ðv)ÑÒzm}ƒˆ~ –бÎ;vÒú…z™}'ˆŠ˜z‘)ÐÒÄêvÒúJ}'ˆ0{Ê}ƒˆƒÐ „ávÄz~‘>z€†>zˆŠ|‹š]’¿Ðz‘Œ Ú9vĉ€]|‰›ƒv́ƒ‚‚Øv±Î}ƒ{ʒ ›’•‰‘ ~Šˆ}ƒ}ë†>z€ˆŠ›’[‘]™>›’•‘>™zš]’[Œ>Œ"}'‘Œ"}'ˆŠ’•”az~’[‰€‘ê˜B‰"Œ"}'x¿vŸ‘ ¡y£Š¤ƒ¥0§0§0¨a©Yª«¬ã¤­®¯]§c·€¸€¸ ê³"Ë µ ª Á±µ ª‹«³ µ «§ §0¥Š¯"´ ª9¤ À ¤Š«¶Öœ¤£Ã׬¯]¤0½mv9΄ÄÌ=„êv Î;v>úvmú…z™}'ˆŠ˜Bz€‘)vށƒ‚‚Å]vÍÚ~Šz~’ › ~Š’[{ƒzxÒÎ}ƒ{'’[›’•‰‘"“èlˆ}ƒ} úJ‰"Œ"}ƒx[›Ž‰ˆÌëzˆ0›’[‘]™vŸ‘H¡y£¤¦¥0§0§Š¨©Yª«¬¤­®Y¯>§þ€þ€®Y¯ ºªª>³ µaÀ§0§'®%©Yª‹«@¤­ã®Y¯>§º;å Á v úvgú…z€ˆŠ{'>›'ÐgwvgÚ]z‘~‰ˆ’[‘]’¿Ðz€‘>ŒHúvgú…zˆ0{ʒ[‘]|‹’[}'Ûޒ {Êý€v ¦‚€‚€ù>v`wy]’[x[Œ"’[‘]™ÖzѱzˆŠ™€}J„‘>‘]‰~0za~}¦Œ`Ó뉀ˆŠ†]>›‰€Ž u͑]™€x[’[›šJèlš]}JÌi}ƒ‘]‘7èlˆ}ƒ}'Ý>z€‘]|9vðåi¤a˽m³"® µ ®¿©Ç¤aª µaÀ Á ©Yª«€³"© ¬0®¿©Ç¥Ê¬0бƒ‚Êv àvÚ]za~~0z]v ø€øø]vOÌÍz€ˆŠ›’[‘]™~}ƒ{0š>‘]’[]}¦›lŽ‰€ˆx•}' {'z€x•’[ý'}¦Œ {'‰€‘~}'ï~“%ŽˆŠ}'}P™€ˆ0z˜B˜zˆ0›'vEŸ‘ ¡y£¤¦¥0§0§Š¨©Yª«¬—¤­ß®¯]§ ÷€ñÇò ô ¡ÒÐ>豈Š}'‘~‰Ð"ŸÔ~Šzx[–€Ð>}'Ý]ˆŠ>z€ˆ’¿v ÄcvgÚ"{0š>z>vHƒ‚‚€ø]vJѱz‘]™>z™} èlš]}'‰ˆ–hz‘>ŒHѱz‘]™>z™} è±}ƒ{0š]‘>‰€x[‰€™€–Ó뉀˜B†m}Ê~}ƒ‘>{Ê}Jz€‘>ŒÌg}ƒˆŽ‰ˆ˜z€‘>{Ê} ǒ•‘ ΍"~Š{0š!v7Ÿ‘#";v „áv úviŒ]}…â㉀ˆ~Bz€‘>Œ àv Ñiv viÑ)}'}ƒˆ“ Œ]z€˜œÐa}ƒŒ"’•~‰ˆŠ›ƒÐ9åO¤Ë½9³"®Ô§'£®¤ƒ§½ µ ¬0¬0©Yª‹«§'ª ©YªM¨§i¾á§0§'£´ À•µ ªm¨© ¬0®¿©Ç§0ÃaÐ9„x[˜B}'ˆŠ}iÑ$%$é“'&zzˆŠÝ‰‹}ƒ|mv âBvÚ"’•˜z( z‘Òv ƒ‚‚€÷>v Ó뉀˜B†]"~0za~’[‰€‘zxÓ뉘†>x•}'•~ – ‰€ŽJÌ͈‰Ý>zÝ>’•x[’[›~’ {žÎ’ ›Šz˜áÝ>’•™>za~Š’•‰‘E݋–ð˜B}¦z‘>›‰€Ž 
èlˆŠ}'}làˆŠz€˜˜z€ˆŠ›ƒv=Ÿ‘¡y£¤¦¥0§0§Š¨©Yª«¬Þ¤­å Á)ô ¾) )* ¸ aÐ ”‰€x[]˜B}+]Ðm†z™€}¦›á×í€Å¦ömƒõø]ÐÓ뉀†m}'‘>š>z™}'‘)Ð=Î}'‘"“ ˜zˆŠ|9Ð]„]™>›~ƒv
An Improved Parser for Data-Oriented Lexical-Functional Analysis

Rens Bod
Informatics Research Institute, University of Leeds, Leeds LS2 9JT, UK, & Institute for Logic, Language and Computation, University of Amsterdam
[email protected]

Abstract

We present an LFG-DOP parser which uses fragments from LFG-annotated sentences to parse new sentences. Experiments with the Verbmobil and Homecentre corpora show that (1) Viterbi n best search performs about 100 times faster than Monte Carlo search while both achieve the same accuracy; (2) the DOP hypothesis, which states that parse accuracy increases with increasing fragment size, is confirmed for LFG-DOP; (3) LFG-DOP's relative frequency estimator performs worse than a discounted frequency estimator; and (4) LFG-DOP significantly outperforms Tree-DOP if evaluated on tree structures only.

1 Introduction

Data-Oriented Parsing (DOP) models learn how to provide linguistic representations for an unlimited set of utterances by generalizing from a given corpus of properly annotated exemplars. They operate by decomposing the given representations into (arbitrarily large) fragments and recomposing those pieces to analyze new utterances. A probability model is used to choose from the collection of different fragments of different sizes those that make up the most appropriate analysis of an utterance. DOP models have been shown to achieve state-of-the-art parsing performance on benchmarks such as the Wall Street Journal corpus (see Bod 2000a). The original DOP model in Bod (1993) was based on utterance analyses represented as surface trees, and is equivalent to a Stochastic Tree-Substitution Grammar. But the model has also been applied to several other grammatical frameworks, e.g. Tree-Insertion Grammar (Hoogweg 2000), Tree-Adjoining Grammar (Neumann 1998), Lexical-Functional Grammar (Bod & Kaplan 1998; Cormons 1999), Head-driven Phrase Structure Grammar (Neumann & Flickinger 1999), and Montague Grammar (Bonnema et al. 1997; Bod 1999).
Most probability models for DOP use the relative frequency estimator to estimate fragment probabilities, although Bod (2000b) trains fragment probabilities by a maximum likelihood reestimation procedure belonging to the class of expectation-maximization algorithms. The DOP model has also been tested as a model for human sentence processing (Bod 2000d).

This paper presents ongoing work on DOP models for Lexical-Functional Grammar representations, known as LFG-DOP (Bod & Kaplan 1998). We develop a parser which uses fragments from LFG-annotated sentences to parse new sentences, and we derive some experimental properties of LFG-DOP on two LFG-annotated corpora: the Verbmobil and Homecentre corpus. The experiments show that the DOP hypothesis, which states that there is an increase in parse accuracy if larger fragments are taken into account (Bod 1998), is confirmed for LFG-DOP. We report on an improved search technique for estimating the most probable analysis. While a Monte Carlo search converges provably to the most probable parse, a Viterbi n best search performs as well as Monte Carlo while its processing time is two orders of magnitude faster. We also show that LFG-DOP outperforms Tree-DOP if evaluated on tree structures only.

2 Summary of LFG-DOP

In accordance with Bod (1998), a particular DOP model is described by

• a definition of a well-formed representation for utterance analyses,
• a set of decomposition operations that divide a given utterance analysis into a set of fragments,
• a set of composition operations by which such fragments may be recombined to derive an analysis of a new utterance, and
• a definition of a probability model that indicates how the probability of a new utterance analysis is computed.

In defining a DOP model for LFG representations, Bod & Kaplan (1998) give the following settings for DOP's four parameters.
2.1 Representations

The representations used by LFG-DOP are directly taken from LFG: they consist of a c-structure, an f-structure and a mapping φ between them. The following figure shows an example representation for Kim eats. (We leave out some features to keep the example simple.)

Figure 1: the c-structure [S [NP Kim] [VP eats]] φ-linked to the f-structure [SUBJ [PRED 'Kim', NUM SG], TENSE PRES, PRED 'eat(SUBJ)']

Bod & Kaplan also introduce the notion of accessibility, which they later use for defining the decomposition operations of LFG-DOP: an f-structure unit f is φ-accessible from a node n iff either n is φ-linked to f (that is, f = φ(n)) or f is contained within φ(n) (that is, there is a chain of attributes that leads from φ(n) to f). According to the LFG representation theory, c-structures and f-structures must satisfy certain formal well-formedness conditions. A c-structure/f-structure pair is a valid LFG representation only if it satisfies the Nonbranching Dominance, Uniqueness, Coherence and Completeness conditions (Kaplan & Bresnan 1982).

2.2 Decomposition operations and Fragments

The fragments for LFG-DOP consist of connected subtrees whose nodes are in φ-correspondence with the corresponding sub-units of f-structures. To give a precise definition of LFG-DOP fragments, it is convenient to recall the decomposition operations employed by the original DOP model, which is also known as the "Tree-DOP" model (Bod 1993, 1998):

(1) Root: the Root operation selects any node of a tree to be the root of the new subtree and erases all nodes except the selected node and the nodes it dominates.

(2) Frontier: the Frontier operation then chooses a set (possibly empty) of nodes in the new subtree different from its root and erases all subtrees dominated by the chosen nodes.

Bod & Kaplan extend Tree-DOP's Root and Frontier operations so that they also apply to the nodes of the c-structure in LFG, while respecting the principles of c/f-structure correspondence.
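To make the Tree-DOP decomposition concrete, here is a minimal sketch of the Root and Frontier operations on plain phrase-structure trees (no φ-links; the helper names and the label-based cut set are our own simplifications, not part of the original model):

```python
# Minimal sketch of Tree-DOP's Root and Frontier operations. A tree is
# a (label, children) tuple; a leaf has an empty children list.

def subtrees(tree):
    """Root: every node of the tree heads a candidate fragment."""
    yield tree
    for child in tree[1]:
        yield from subtrees(child)

def frontier(tree, cut_labels):
    """Frontier: erase all subtrees dominated by the chosen nodes,
    leaving each chosen node as an open substitution site. For
    simplicity the chosen nodes are identified by label here, and the
    cut set should not contain the root's label."""
    label, children = tree
    if children and label in cut_labels:
        return (label, [])
    return (label, [frontier(c, cut_labels) for c in children])

s = ("S", [("NP", [("Kim", [])]), ("VP", [("eats", [])])])

# Root selects the S node itself; Frontier then cuts below the NP node:
fragment = frontier(s, {"NP"})
# fragment == ("S", [("NP", []), ("VP", [("eats", [])])])
```

Applying `subtrees` to every tree in a corpus and `frontier` with every admissible cut set would enumerate the full Tree-DOP fragment bag; LFG-DOP additionally carries the φ-linked f-structure units along with each fragment.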
When a node is selected by the Root operation, all nodes outside of that node's subtree are erased, just as in Tree-DOP. Further, for LFG-DOP, all φ links leaving the erased nodes are removed and all f-structure units that are not φ-accessible from the remaining nodes are erased. For example, if Root selects the NP in figure 1, then the f-structure corresponding to the S node is erased, giving figure 2 as a possible fragment:

Figure 2: the c-structure [NP Kim] with f-structure [PRED 'Kim', NUM SG]

In addition the Root operation deletes from the remaining f-structure all semantic forms that are local to f-structures that correspond to erased c-structure nodes, and it thereby also maintains the fundamental two-way connection between words and meanings. Thus, if Root selects the VP node so that the NP is erased, the subject semantic form "Kim" is also deleted:

Figure 3: the c-structure [VP eats] with f-structure [SUBJ [NUM SG], TENSE PRES, PRED 'eat(SUBJ)']

As with Tree-DOP, the Frontier operation then selects a set of frontier nodes and deletes all subtrees they dominate. Like Root, it also removes the φ links of the deleted nodes and erases any semantic form that corresponds to any of those nodes. For instance, if the NP in figure 1 is selected as a frontier node, Frontier erases the predicate "Kim" from the fragment:

Figure 4: the c-structure [S [NP] [VP eats]] with f-structure [SUBJ [NUM SG], TENSE PRES, PRED 'eat(SUBJ)']

Finally, Bod & Kaplan present a third decomposition operation, Discard, defined to construct generalizations of the fragments supplied by Root and Frontier. Discard acts to delete combinations of attribute-value pairs subject to the following condition: Discard does not delete pairs whose values φ-correspond to remaining c-structure nodes. According to Bod & Kaplan (1998), Discard-generated fragments are needed to parse sentences that are "ungrammatical with respect to the corpus", thus increasing the robustness of the model.

2.3 The composition operation

In LFG-DOP the operation for combining fragments is carried out in two steps.
First the c-structures are combined by leftmost substitution subject to the category-matching condition, as in Tree-DOP. This is followed by the recursive unification of the f-structures corresponding to the matching nodes. A derivation for an LFG-DOP representation R is a sequence of fragments the first of which is labeled with S and for which the iterative application of the composition operation produces R. For an illustration of the composition operation, see Bod & Kaplan (1998).

2.4 Probability models

As in Tree-DOP, an LFG-DOP representation R can typically be derived in many different ways. If each derivation D has a probability P(D), then the probability of deriving R is the sum of the individual derivation probabilities:

(1) P(R) = Σ_{D derives R} P(D)

An LFG-DOP derivation is produced by a stochastic process which starts by randomly choosing a fragment whose c-structure is labeled with the initial category. At each subsequent step, a next fragment is chosen at random from among the fragments that can be composed with the current subanalysis. The chosen fragment is composed with the current subanalysis to produce a new one; the process stops when an analysis results with no non-terminal leaves. We will call the set of composable fragments at a certain step in the stochastic process the competition set at that step. Let CP(f | CS) denote the probability of choosing a fragment f from a competition set CS containing f; then the probability of a derivation D = <f1, f2 ... fk> is

(2) P(<f1, f2 ... fk>) = Π_i CP(f_i | CS_i)

where the competition probability CP(f | CS) is expressed in terms of fragment probabilities P(f):

(3) CP(f | CS) = P(f) / Σ_{f'∈CS} P(f')

Bod & Kaplan give three definitions of increasing complexity for the competition set: the first definition groups all fragments that only satisfy the Category-matching condition of the composition operation; the second definition groups all fragments which satisfy both Category-matching and Uniqueness; and the third definition groups all fragments which satisfy Category-matching, Uniqueness and Coherence. Bod & Kaplan point out that the Completeness condition cannot be enforced at each step of the stochastic derivation process, and is a property of the final representation which can only be enforced by sampling valid representations from the output of the stochastic process. In this paper, we will only deal with the third definition of competition set, as it selects only those fragments at each derivation step that may finally result in a valid LFG representation, thus reducing the off-line validity checking to the Completeness condition.

Note that the computation of the competition probability in the above formulas still requires a definition for the fragment probability P(f). Bod and Kaplan define the probability of a fragment simply as its relative frequency in the bag of all fragments generated from the corpus, just as in most Tree-DOP models. We will refer to this fragment estimator as "simple relative frequency" or "simple RF". We will also use an alternative definition of fragment probability which is a refinement of simple RF. This alternative fragment probability definition distinguishes between fragments supplied by Root/Frontier and fragments supplied by Discard. We will treat the first type of fragments as seen events, and the second type of fragments as previously unseen events.
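Equations (1)-(3) can be sketched numerically; the fragment probabilities and competition sets below are invented toy values, not corpus figures:

```python
# Toy computation of equations (1)-(3): competition probabilities,
# derivation probabilities, and the representation probability.

def cp(f, competition_set, p):
    """Eq. (3): probability of choosing f from its competition set."""
    return p[f] / sum(p[g] for g in competition_set)

# Invented fragment probabilities P(f):
p = {"S->NP VP": 0.4, "S->NP eats": 0.2, "NP->Kim": 0.3, "VP->eats": 0.1}

# Two derivations of the same analysis, each a list of
# (chosen fragment, competition set at that step) pairs:
d1 = [("S->NP VP", ["S->NP VP", "S->NP eats"]),
      ("NP->Kim", ["NP->Kim"]),
      ("VP->eats", ["VP->eats"])]
d2 = [("S->NP eats", ["S->NP VP", "S->NP eats"]),
      ("NP->Kim", ["NP->Kim"])]

def p_derivation(d):
    """Eq. (2): product of competition probabilities along the derivation."""
    prob = 1.0
    for f, cs in d:
        prob *= cp(f, cs, p)
    return prob

# Eq. (1): sum over all derivations producing the representation.
p_r = p_derivation(d1) + p_derivation(d2)
```

In a real LFG-DOP instance the competition sets would be determined by Category-matching, Uniqueness and Coherence at each step rather than listed by hand.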
We thus create two separate bags corresponding to two separate distributions: a bag with fragments generated by Root and Frontier, and a bag with fragments generated by Discard. We assign probability mass to the fragments of each bag by means of discounting: the relative frequencies of seen events are discounted and the gained probability mass is reserved for the bag of unseen events (cf. Ney et al. 1997). We accomplish this by a very simple estimator: the Turing-Good estimator (Good 1953) which computes the probability mass of unseen events as n1/N where n1 is the number of singleton events and N is the total number of seen events. This probability mass is assigned to the bag of Discard-generated fragments. The remaining mass (1 − n1/N) is assigned to the bag of Root/Frontier-generated fragments. The probability of each fragment is then computed as its relative frequency in its bag multiplied by the probability mass assigned to this bag. Let |f| denote the frequency of a fragment f; then its probability is given by:

(4) P(f | f is generated by Root/Frontier) = (1 − n1/N) · |f| / Σ_{f': f' is generated by Root/Frontier} |f'|

(5) P(f | f is generated by Discard) = (n1/N) · |f| / Σ_{f': f' is generated by Discard} |f'|

We will refer to this fragment probability estimator as "discounted relative frequency" or "discounted RF".

4 Parsing with LFG-DOP

In his PhD-thesis, Cormons (1999) presents a parsing algorithm for LFG-DOP which is based on the Tree-DOP parsing technique described in Bod (1998). Cormons first converts LFG-representations into more compact indexed trees: each node in the c-structure is assigned an index which refers to the φ-corresponding f-structure unit. For example, the representation in figure 1 is indexed as

(S.1 (NP.2 Kim.2) (VP.1 eats.1))

where

1 --> [ (SUBJ = 2) (TENSE = PRES) (PRED = eat(SUBJ)) ]
2 --> [ (PRED = Kim) (NUM = SG) ]

The indexed trees are then fragmented by applying the Tree-DOP decomposition operations described in section 2.
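The discounted-RF estimator of equations (4) and (5) can be sketched as follows; the fragment names and counts are invented for illustration:

```python
from collections import Counter

def discounted_rf(seen_counts, unseen_counts):
    """Split probability mass between the seen (Root/Frontier) bag and
    the unseen (Discard) bag using the Turing-Good estimate n1/N, then
    take relative frequencies within each bag (equations (4) and (5)).
    Assumes the two bags use distinct fragment keys."""
    N = sum(seen_counts.values())
    n1 = sum(1 for c in seen_counts.values() if c == 1)
    unseen_mass = n1 / N                      # mass reserved for Discard bag
    p = {}
    for f, c in seen_counts.items():          # eq. (4)
        p[f] = (1 - unseen_mass) * c / N
    M = sum(unseen_counts.values())
    for f, c in unseen_counts.items():        # eq. (5)
        p[f] = unseen_mass * c / M
    return p

seen = Counter({"fragA": 3, "fragB": 1})      # Root/Frontier bag: N=4, n1=1
unseen = Counter({"fragC": 2, "fragD": 2})    # Discard bag
probs = discounted_rf(seen, unseen)
```

With these toy counts the Discard bag receives n1/N = 1/4 of the mass, and the probabilities over both bags sum to one.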
Next, the LFG-DOP decomposition operations Root, Frontier and Discard are applied to the f-structure units that correspond to the indices in the c-structure subtrees. Having obtained the set of LFG-DOP fragments in this way, each test sentence is parsed by a bottom-up chart parser using initially the indexed subtrees only. Thus only the Category-matching condition is enforced during the chart-parsing process. The Uniqueness and Coherence conditions of the corresponding f-structure units are enforced during the disambiguation or chart decoding process. Disambiguation is accomplished by computing a large number of random derivations from the chart and by selecting the analysis which results most often from these derivations. This technique is known as "Monte Carlo disambiguation" and has been extensively described in the literature (e.g. Bod 1993, 1998; Chappelier & Rajman 2000; Goodman 1998; Hoogweg 2000). Sampling a random derivation from the chart consists of choosing at random one of the fragments from the set of composable fragments at every labeled chart-entry (where the random choices at each chart-entry are based on the probabilities of the fragments). The derivations are sampled in a top-down, leftmost order so as to maintain the LFG-DOP derivation order. Thus the competition sets of composable fragments are computed on the fly during the Monte Carlo sampling process by grouping the f-structure units that unify and that are coherent with the subderivation built so far. As mentioned in section 3, the Completeness condition can only be checked after the derivation process. Incomplete derivations are simply removed from the sampling distribution. After sampling a sufficiently large number of random derivations that satisfy the LFG validity requirements, the most probable analysis is estimated by the analysis which results most often from the sampled derivations.
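The core of Monte Carlo disambiguation can be illustrated with a deliberately simplified sketch (our own toy code): we collapse the chart into a precomputed list of (analysis, derivation-probability) pairs, sample derivations according to those probabilities, and return the analysis sampled most often.

```python
import random
from collections import Counter

def monte_carlo_best_analysis(derivations, n_samples=10000, seed=0):
    """Toy Monte Carlo disambiguation. `derivations` is a list of
    (analysis, probability) pairs, one per valid derivation; in a real
    parser these samples would be drawn on the fly from the chart.
    Returns the analysis generated most often by the sampled derivations."""
    rng = random.Random(seed)
    analyses = [a for a, _ in derivations]
    weights = [p for _, p in derivations]
    sampled = rng.choices(analyses, weights=weights, k=n_samples)
    return Counter(sampled).most_common(1)[0][0]
```

The point of the example: an analysis with several moderately probable derivations (total mass 0.6 below) beats an analysis with a single more probable derivation (0.4), which is exactly why the most frequently sampled analysis can differ from the analysis of the single most probable derivation.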
As a stop condition on the number of sampled derivations, we compute the probability of error: the probability that the analysis that is most frequently generated by the sampled derivations is not equal to the most probable analysis. Sampling stops once this probability drops below 0.05 (see Bod 1998). In order to rule out the possibility that the sampling process never stops, we use a maximum sample size of 10,000 derivations. While the Monte Carlo disambiguation technique converges provably to the most probable analysis, it is quite inefficient. It is possible to use an alternative, heuristic search based on Viterbi n best (we will not go into the PCFG-reduction technique presented in Goodman (1998) since that heuristic only works for Tree-DOP and is beneficial only if all subtrees are taken into account and if the so-called "labeled recall parse" is computed). A Viterbi n best search for LFG-DOP estimates the most probable analysis by computing the n most probable derivations, and by then summing up the probabilities of the valid derivations that produce the same analysis. The algorithm for computing the n most probable derivations follows straightforwardly from the algorithm which computes the most probable derivation by means of Viterbi optimization (see e.g. Sima'an 1999).

5 Experimental Evaluation

We derived some experimental properties of LFG-DOP by studying its behavior on the two LFG-annotated corpora that are currently available: the Verbmobil corpus and the Homecentre corpus. Both corpora were annotated at Xerox PARC. They contain packed LFG-representations (Maxwell & Kaplan 1991) of the grammatical parses of each sentence together with an indication of which of these parses is the correct one. For our experiments we only used the correct parses of each sentence, resulting in 540 Verbmobil parses and 980 Homecentre parses. Each corpus was divided into a 90% training set and a 10% test set.
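Returning briefly to the Viterbi n best search described above: the final aggregation step, summing the probabilities of valid derivations that yield the same analysis, is trivial to express (a sketch under our own representation of the n-best list):

```python
from collections import defaultdict

def best_analysis_from_nbest(nbest):
    """Estimate the most probable analysis from an n-best list of valid
    derivations. `nbest` is a list of (analysis, derivation_probability)
    pairs; derivation probabilities producing the same analysis are summed
    and the analysis with the largest total is returned."""
    totals = defaultdict(float)
    for analysis, prob in nbest:
        totals[analysis] += prob
    return max(totals, key=totals.get)
```

As with the Monte Carlo estimate, this can prefer an analysis whose many derivations jointly outweigh the single most probable derivation.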
This division was random except for one constraint: that all the words in the test set actually occurred in the training set. The sentences from the test set were parsed and disambiguated by means of the fragments from the training set. Due to memory limitations, we restricted the maximum depth of the indexed subtrees to 4. Because of the small size of the corpora we averaged our results on 10 different training/test set splits. Besides an exact match accuracy metric, we also used a more fine-grained score based on the well-known PARSEVAL metrics that evaluate phrase-structure trees (Black et al. 1991). The PARSEVAL metrics compare a proposed parse P with the corresponding correct treebank parse T as follows:

Precision = # correct constituents in P / # constituents in P
Recall = # correct constituents in P / # constituents in T

A constituent in P is correct if there exists a constituent in T of the same label that spans the same words and that φ-corresponds to the same f-structure unit (see Bod 2000c for some illustrations of these metrics for LFG-DOP).

5.1 Comparing the two fragment estimators

We were first interested in comparing the performance of the simple RF estimator against the discounted RF estimator. Furthermore, we want to study the contribution of generalized fragments to the parse accuracy. We therefore created for each training set two sets of fragments: one which contains all fragments (up to depth 4) and one which excludes the generalized fragments as generated by Discard. The exclusion of these Discard-generated fragments means that all probability mass goes to the fragments generated by Root and Frontier, in which case the two estimators are equivalent. The following two tables present the results of our experiments, where +Discard refers to the full set of fragments and −Discard refers to the fragment set without Discard-generated fragments.
Estimator       Exact Match            Precision              Recall
                +Discard   −Discard    +Discard   −Discard    +Discard   −Discard
Simple RF       1.1%       35.2%       13.8%      76.0%       11.5%      74.9%
Discounted RF   35.9%      35.2%       77.5%      76.0%       76.4%      74.9%

Table 1. Experimental results on the Verbmobil

Estimator       Exact Match            Precision              Recall
                +Discard   −Discard    +Discard   −Discard    +Discard   −Discard
Simple RF       2.7%       37.9%       17.1%      77.8%       15.5%      77.2%
Discounted RF   38.4%      37.9%       80.0%      77.8%       78.6%      77.2%

Table 2. Experimental results on the Homecentre

The tables show that the simple RF estimator scores extremely badly if all fragments are used: the exact match is only 1.1% on the Verbmobil corpus and 2.7% on the Homecentre corpus, whereas the discounted RF estimator scores respectively 35.9% and 38.4% on these corpora. Also the more fine-grained precision and recall scores obtained with the simple RF estimator are quite low: e.g. 13.8% and 11.5% on the Verbmobil corpus, where the discounted RF estimator obtains 77.5% and 76.4%. Interestingly, the accuracy of the simple RF estimator is much higher if Discard-generated fragments are excluded. This suggests that treating generalized fragments probabilistically in the same way as ungeneralized fragments is harmful. The tables also show that the inclusion of Discard-generated fragments leads only to a slight accuracy increase under the discounted RF estimator. Unfortunately, according to paired t-testing only the differences for the precision scores on the Homecentre corpus were statistically significant.

5.2 Comparing different fragment sizes

We were also interested in the impact of fragment size on the parse accuracy. We therefore performed a series of experiments where the fragment set is restricted to fragments of a certain maximum depth (where the depth of a fragment is defined as the longest path from root to leaf of its c-structure unit).
We used the same training/test set splits as in the previous experiments and used both ungeneralized and generalized fragments together with the discounted RF estimator.

Fragment Depth   Exact Match   Precision   Recall
1                30.6%         74.2%       72.2%
≤2               34.1%         76.2%       74.5%
≤3               35.6%         76.8%       75.9%
≤4               35.9%         77.5%       76.4%

Table 3. Accuracies on the Verbmobil

Fragment Depth   Exact Match   Precision   Recall
1                31.3%         75.0%       71.5%
≤2               36.3%         77.1%       74.7%
≤3               37.8%         77.8%       76.1%
≤4               38.4%         80.0%       78.6%

Table 4. Accuracies on the Homecentre

Tables 3 and 4 show that there is a consistent increase in parse accuracy for all metrics if larger fragments are included, but that the increase itself decreases. This phenomenon is also known as the DOP hypothesis (Bod 1998), and has been confirmed for Tree-DOP on the ATIS, OVIS and Wall Street Journal treebanks (see Bod 1993, 1998, 1999, 2000a; Sima'an 1999; Bonnema et al. 1997; Hoogweg 2000). The current result thus extends the validity of the DOP hypothesis to LFG annotations. We do not yet know whether the accuracy continues to increase if even larger fragments are included (for Tree-DOP it has been shown that the accuracy decreases after a certain depth, probably due to overfitting -- cf. Bonnema et al. 1997; Bod 2000a).

5.3 Comparing LFG-DOP to Tree-DOP

In the following experiment, we are interested in the impact of functional structures on predicting the correct tree structures. We therefore removed all f-structure units from the fragments, thus yielding a Tree-DOP model, and compared the results against the full LFG-DOP model (using the discounted RF estimator and all fragments up to depth 4). We evaluated the parse accuracy on the tree structures only, using exact match together with the standard PARSEVAL measures. We used the same training/test set splits as in the previous experiments.

Model      Exact Match   Precision   Recall
Tree-DOP   46.6%         88.9%       86.7%
LFG-DOP    50.8%         90.3%       88.4%

Table 5.
Tree accuracy on the Verbmobil

Model      Exact Match   Precision   Recall
Tree-DOP   49.0%         93.4%       92.1%
LFG-DOP    53.2%         95.8%       94.7%

Table 6. Tree accuracy on the Homecentre

The results indicate that LFG-DOP's functional structures help to improve the parse accuracy of tree structures. In other words, LFG-DOP outperforms Tree-DOP if evaluated on tree structures only. According to paired t-tests all differences in accuracy were statistically significant. This result is promising since Tree-DOP has been shown to obtain state-of-the-art performance on the Wall Street Journal corpus (see Bod 2000a).

5.4 Comparing Viterbi n best to Monte Carlo

Finally, we were interested in comparing an alternative, more efficient search method for estimating the most probable analysis. In the following set of experiments we use a Viterbi n best search heuristic (as explained in section 4), and let n range from 1 to 10,000 derivations. We also compute the results obtained by Monte Carlo for the same number of derivations. We used the same training/test set splits as in the previous experiments and used both ungeneralized and generalized fragments up to depth 4 together with the discounted RF estimator.

Nr. of derivations   Viterbi n best   Monte Carlo
1                    74.8%            20.1%
10                   75.3%            36.7%
100                  77.5%            67.0%
1,000                77.5%            77.1%
10,000               77.5%            77.5%

Table 7. Precision on the Verbmobil

Nr. of derivations   Viterbi n best   Monte Carlo
1                    75.6%            25.6%
10                   76.2%            44.3%
100                  79.1%            74.6%
1,000                79.8%            79.1%
10,000               79.8%            80.0%

Table 8. Precision on the Homecentre

The tables show that Viterbi n best already achieves a maximum accuracy at 100 derivations (at least on the Verbmobil corpus) while Monte Carlo needs a much larger number of derivations to obtain these results. On the Homecentre corpus, Monte Carlo slightly outperforms Viterbi n best at 10,000 derivations, but these differences are not statistically significant. Also remarkable are the relatively high results obtained with Viterbi n best if only one derivation is used.
This score corresponds to the analysis generated by the most probable (valid) derivation. Thus Viterbi n best is a promising alternative to Monte Carlo, resulting in a speed-up of about two orders of magnitude.

6 Conclusion

We presented a parser which analyzes new input by probabilistically combining fragments from LFG-annotated corpora into new analyses. We have seen that the parse accuracy increased with increasing fragment size, and that LFG's functional structures contribute to significantly higher parse accuracy on tree structures. We tested two search techniques for the most probable analysis, Viterbi n best and Monte Carlo. While these two techniques achieved about the same accuracy, Viterbi n best was about 100 times faster than Monte Carlo.

References

E. Black et al., 1991. "A Procedure for Quantitatively Comparing the Syntactic Coverage of English", Proceedings DARPA Workshop, Pacific Grove, Morgan Kaufmann.
R. Bod, 1993. "Using an Annotated Language Corpus as a Virtual Stochastic Grammar", Proceedings AAAI'93, Washington D.C.
R. Bod, 1998. Beyond Grammar: An Experience-Based Theory of Language, CSLI Publications, Cambridge University Press.
R. Bod, 1999. "Context-Sensitive Dialogue Processing with the DOP Model", Natural Language Engineering 5(4), 309-323.
R. Bod, 2000a. "Parsing with the Shortest Derivation", Proceedings COLING-2000, Saarbrücken, Germany.
R. Bod, 2000b. "Combining Semantic and Syntactic Structure for Language Modeling", Proceedings ICSLP-2000, Beijing, China.
R. Bod, 2000c. "An Empirical Evaluation of LFG-DOP", Proceedings COLING-2000, Saarbrücken, Germany.
R. Bod, 2000d. "The Storage and Computation of Frequent Sentences", Proceedings AMLAP-2000, Leiden, The Netherlands.
R. Bod and R. Kaplan, 1998. "A Probabilistic Corpus-Driven Model for Lexical Functional Analysis", Proceedings COLING-ACL'98, Montreal, Canada.
R. Bonnema, R. Bod and R. Scha, 1997. "A DOP Model for Semantic Interpretation", Proceedings ACL/EACL-97, Madrid, Spain.
J. Chappelier and M. Rajman, 2000. "Monte Carlo Sampling for NP-hard Maximization Problems in the Framework of Weighted Parsing", in NLP 2000, Lecture Notes in Artificial Intelligence 1835, 106-117.
B. Cormons, 1999. Analyse et désambiguisation: Une approche à base de corpus (Data-Oriented Parsing) pour les répresentations lexicales fonctionnelles. PhD thesis, Université de Rennes, France.
I. Good, 1953. "The Population Frequencies of Species and the Estimation of Population Parameters", Biometrika 40, 237-264.
J. Goodman, 1998. Parsing Inside-Out, PhD thesis, Harvard University, Mass.
L. Hoogweg, 2000. Enriching DOP1 with the Insertion Operation, MSc thesis, Dept. of Computer Science, University of Amsterdam.
R. Kaplan and J. Bresnan, 1982. "Lexical-Functional Grammar: A Formal System for Grammatical Representation", in J. Bresnan (ed.), The Mental Representation of Grammatical Relations, The MIT Press, Cambridge, Mass.
J. Maxwell and R. Kaplan, 1991. "A Method for Disjunctive Constraint Satisfaction", in M. Tomita (ed.), Current Issues in Parsing Technology, Kluwer Academic Publishers.
G. Neumann, 1998. "Automatic Extraction of Stochastic Lexicalized Tree Grammars from Treebanks", Proceedings of the 4th Workshop on Tree-Adjoining Grammars and Related Frameworks, Philadelphia, PA.
G. Neumann and D. Flickinger, 1999. "Learning Stochastic Lexicalized Tree Grammars from HPSG", DFKI Technical Report, Saarbrücken, Germany.
H. Ney, S. Martin and F. Wessel, 1997. "Statistical Language Modeling Using Leaving-One-Out", in S. Young & G. Bloothooft (eds.), Corpus-Based Methods in Language and Speech Processing, Kluwer Academic Publishers.
K. Sima'an, 1999. Learning Efficient Disambiguation. PhD thesis, ILLC dissertation series number 1999-02, Utrecht / Amsterdam.
Interpreting the human genome sequence, using stochastic grammars Richard Durbin The Sanger Centre Wellcome Trust Genome Campus Hinxton Cambridge CB10 1SA UK [email protected] Abstract The 3 billion base pair sequence of the human genome is now available, and attention is focusing on annotating it to extract biological meaning. I will discuss what we have obtained, and the methods that are being used to analyse biological sequences. In particular I will discuss approaches using stochastic grammars analogous to those used in computational linguistics, both for gene finding and protein family classification.
What is the Minimal Set of Fragments that Achieves Maximal Parse Accuracy?

Rens Bod
School of Computing, University of Leeds, Leeds LS2 9JT, & Institute for Logic, Language and Computation, University of Amsterdam, Spuistraat 134, 1012 VB Amsterdam
[email protected]

Abstract

We aim at finding the minimal set of fragments which achieves maximal parse accuracy in Data Oriented Parsing. Experiments with the Penn Wall Street Journal treebank show that counts of almost arbitrary fragments within parse trees are important, leading to improved parse accuracy over previous models tested on this treebank (a precision of 90.8% and a recall of 90.6%). We isolate some dependency relations which previous models neglect but which contribute to higher parse accuracy.

1 Introduction

One of the goals in statistical natural language parsing is to find the minimal set of statistical dependencies (between words and syntactic structures) that achieves maximal parse accuracy. Many stochastic parsing models use linguistic intuitions to find this minimal set, for example by restricting the statistical dependencies to the locality of headwords of constituents (Collins 1997, 1999; Eisner 1997), leaving it as an open question whether there exist important statistical dependencies that go beyond linguistically motivated dependencies. The Data Oriented Parsing (DOP) model, on the other hand, takes a rather extreme view on this issue: given an annotated corpus, all fragments (i.e. subtrees) seen in that corpus, regardless of size and lexicalization, are in principle taken to form a grammar (see Bod 1993, 1998; Goodman 1998; Sima'an 1999). The set of subtrees that is used is thus very large and extremely redundant. Both from a theoretical and from a computational perspective we may wonder whether it is possible to impose constraints on the subtrees that are used, in such a way that the accuracy of the model does not deteriorate or perhaps even improves.
That is the main question addressed in this paper. We report on experiments carried out with the Penn Wall Street Journal (WSJ) treebank to investigate several strategies for constraining the set of subtrees. We found that the only constraints that do not decrease the parse accuracy consist in an upper bound on the number of words in the subtree frontiers and an upper bound on the depth of unlexicalized subtrees. We also found that counts of subtrees with several nonheadwords are important, resulting in improved parse accuracy over previous parsers tested on the WSJ.

2 The DOP1 Model

To date, the Data Oriented Parsing model has mainly been applied to corpora of trees whose labels consist of primitive symbols (but see Bod & Kaplan 1998; Bod 2000c, 2001). Let us illustrate the original DOP model presented in Bod (1993), called DOP1, with a simple example. Assume a corpus consisting of only two trees, shown here in bracketed notation:

(S (NP John) (VP (V likes) (NP Mary)))
(S (NP Peter) (VP (V hates) (NP Susan)))

Figure 1. A corpus of two trees

New sentences may be derived by combining fragments, i.e. subtrees, from this corpus, by means of a node-substitution operation indicated as °. Node-substitution identifies the leftmost nonterminal frontier node of one subtree with the root node of a second subtree (i.e., the second subtree is substituted on the leftmost nonterminal frontier node of the first subtree). Thus a new sentence such as Mary likes Susan can be derived by combining subtrees from this corpus:

(S (NP) (VP (V likes) (NP))) ° (NP Mary) ° (NP Susan) = (S (NP Mary) (VP (V likes) (NP Susan)))

Figure 2. A derivation for Mary likes Susan

Other derivations may yield the same tree, e.g.:

(S (NP Mary) (VP (V) (NP))) ° (V likes) ° (NP Susan) = (S (NP Mary) (VP (V likes) (NP Susan)))

Figure 3. Another derivation yielding the same tree

DOP1 computes the probability of a subtree t as the probability of selecting t among all corpus subtrees that can be substituted on the same node as t.
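The leftmost node-substitution operation ° described above is easy to make concrete (a sketch under our own encoding: trees are nested tuples `(label, child, ...)`, a 1-tuple is an open substitution site, and strings are words):

```python
def compose(tree, sub):
    """t1 ° t2: substitute subtree `sub` on the leftmost nonterminal
    frontier node of `tree`. The root label of `sub` must match that node."""
    new, done = _substitute(tree, sub)
    if not done:
        raise ValueError("no open substitution site in tree")
    return new

def _substitute(node, sub):
    """Return (possibly rewritten node, whether the substitution happened)."""
    if isinstance(node, str):          # a word: nothing to substitute
        return node, False
    label, *children = node
    if not children:                   # open substitution site
        if sub[0] != label:
            raise ValueError("category mismatch")
        return sub, True
    out, done = [], False
    for c in children:                 # recurse left-to-right, stop after
        if not done:                   # the first (leftmost) substitution
            c, done = _substitute(c, sub)
        out.append(c)
    return (label, *out), done
```

Composing the subtrees of Figure 2 in order reproduces the full tree for "Mary likes Susan".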
This probability is equal to the number of occurrences of t, | t |, divided by the total number of occurrences of all subtrees t' with the same root label as t. Let r(t) return the root label of t. Then we may write:

P(t) = | t | / Σt': r(t')=r(t) | t' |

In most applications of DOP1, the subtree probabilities are smoothed by the technique described in Bod (1996) which is based on Good-Turing. (The subtree probabilities are not smoothed by backing off to smaller subtrees, since these are taken into account by the parse tree probability, as we will see.) The probability of a derivation t1°...°tn is computed by the product of the probabilities of its subtrees ti:

P(t1°...°tn) = Πi P(ti)

As we have seen, there may be several distinct derivations that generate the same parse tree. The probability of a parse tree T is thus the sum of the probabilities of its distinct derivations. Let tid be the i-th subtree in the derivation d that produces tree T; then the probability of T is given by:

P(T) = Σd Πi P(tid)

Thus the DOP1 model considers counts of subtrees of a wide range of sizes in computing the probability of a tree: everything from counts of single-level rules to counts of entire trees. This means that the model is sensitive to the frequency of large subtrees while taking into account the smoothing effects of counts of small subtrees. Note that the subtree probabilities in DOP1 are directly estimated from their relative frequencies. A number of alternative subtree estimators have been proposed for DOP1 (cf. Bonnema et al. 1999), including maximum likelihood estimation (Bod 2000b). But since the relative frequency estimator has so far not been outperformed by any other estimator for DOP1, we will stick to this estimator in the current paper.

3 Computational Issues

Bod (1993) showed how standard chart parsing techniques can be applied to DOP1.
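The DOP1 relative-frequency estimator and the derivation product above can be sketched as follows (our own toy code; a subtree is identified by a `(root_label, subtree_id)` pair, and the input bag may contain repeats):

```python
from collections import Counter

def subtree_probabilities(subtree_bag):
    """DOP1 relative-frequency estimator:
    P(t) = |t| / sum of counts of all subtrees with the same root label.
    `subtree_bag` is a list of (root_label, subtree_id) pairs."""
    counts = Counter(subtree_bag)
    root_totals = Counter(root for root, _ in subtree_bag)
    return {t: c / root_totals[t[0]] for t, c in counts.items()}

def dop1_derivation_probability(derivation, probs):
    """P(t1 ° ... ° tn) = product of the subtree probabilities P(ti)."""
    p = 1.0
    for t in derivation:
        p *= probs[t]
    return p
```

The tree probability P(T) would then sum `dop1_derivation_probability` over all distinct derivations of T, which is exactly why distinct derivations of the same tree (as in Figures 2 and 3) both contribute mass.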
Each corpus subtree t is converted into a context-free rule r where the left-hand side of r corresponds to the root label of t and the right-hand side of r corresponds to the frontier labels of t. Indices link the rules to the original subtrees so as to maintain the subtree's internal structure and probability. These rules are used to create a derivation forest for a sentence (using a CKY parser), and the most probable parse is computed by sampling a sufficiently large number of random derivations from the forest ("Monte Carlo disambiguation", see Bod 1998). While this technique has been successfully applied to parsing the ATIS portion in the Penn Treebank (Marcus et al. 1993), it is extremely time consuming. This is mainly because the number of random derivations that should be sampled to reliably estimate the most probable parse increases exponentially with the sentence length (see Goodman 1998). It is therefore questionable whether Bod's sampling technique can be scaled to larger domains such as the WSJ portion in the Penn Treebank. Goodman (1996, 1998) showed how DOP1 can be reduced to a compact stochastic context-free grammar (SCFG) which contains exactly eight SCFG rules for each node in the training set trees. Although Goodman's method does still not allow for an efficient computation of the most probable parse (in fact, the problem of computing the most probable parse in DOP1 is NP-hard; see Sima'an 1999), his method does allow for an efficient computation of the "maximum constituents parse", i.e. the parse tree that is most likely to have the largest number of correct constituents. Goodman has shown on the ATIS corpus that the maximum constituents parse performs at least as well as the most probable parse if all subtrees are used. Unfortunately, Goodman's reduction method is only beneficial if indeed all subtrees are used.
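The subtree-to-rule conversion described at the start of this section can be sketched as follows (our own encoding again: nested tuples `(label, child, ...)`, 1-tuples as open frontier nonterminals, strings as words; the index argument stands in for the link back to the original subtree):

```python
def subtree_to_rule(subtree, index):
    """Convert a subtree into an indexed CFG rule (lhs, rhs, index):
    lhs is the root label, rhs the sequence of frontier labels/words,
    and index links the rule back to the original subtree."""
    label, *children = subtree
    rhs = []
    def frontier(node):
        if isinstance(node, str):      # word at the frontier
            rhs.append(node)
            return
        lab, *kids = node
        if not kids:                   # open substitution site
            rhs.append(lab)
        else:
            for k in kids:
                frontier(k)
    for c in children:
        frontier(c)
    return (label, tuple(rhs), index)
```

A CKY parser can then work with these flat rules, while the index preserves each subtree's internal structure and probability for later recovery.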
Sima'an (1999: 108) argues that there may still be an isomorphic SCFG for DOP1 if the corpus subtrees are restricted in size or lexicalization, but that the number of the rules explodes in that case. In this paper we will use Bod's subtree-to-rule conversion method for studying the impact of various subtree restrictions on the WSJ corpus. However, we will not use Bod's Monte Carlo sampling technique from complete derivation forests, as this turned out to be prohibitive for WSJ sentences. Instead, we employ a Viterbi n-best search using a CKY algorithm and estimate the most probable parse from the 1,000 most probable derivations, summing up the probabilities of derivations that generate the same tree. Although this heuristic does not guarantee that the most probable parse is actually found, it is shown in Bod (2000a) to perform at least as well as the estimation of the most probable parse with Monte Carlo techniques. However, in computing the 1,000 most probable derivations by means of Viterbi it is prohibitive to keep track of all subderivations at each edge in the chart (at least for such a large corpus as the WSJ). As in most other statistical parsing systems we therefore use the pruning technique described in Goodman (1997) and Collins (1999: 263-264) which assigns a score to each item in the chart equal to the product of the inside probability of the item and its prior probability. Any item with a score less than 10^−5 times that of the best item is pruned from the chart.

4 What is the Minimal Subtree Set that Achieves Maximal Parse Accuracy?

4.1 The base line

For our base line parse accuracy, we used the now standard division of the WSJ (see Collins 1997, 1999; Charniak 1997, 2000; Ratnaparkhi 1999) with sections 2 through 21 for training (approx. 40,000 sentences) and section 23 for testing (2416 sentences ≤ 100 words); section 22 was used as development set. All trees were stripped of their semantic tags, co-reference information and quotation marks.
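The Goodman/Collins-style pruning rule quoted above (score = inside probability times prior probability; drop anything below 10^−5 of the best item) reduces to a few lines (a sketch under our own representation of chart items):

```python
def prune_chart_items(items, threshold=1e-5):
    """Beam pruning for a chart cell. `items` maps item ids to
    (inside_probability, prior_probability) pairs. Each item is scored by
    inside * prior; items scoring less than `threshold` times the best
    score in the cell are pruned. Returns the set of surviving item ids."""
    scores = {i: inside * prior for i, (inside, prior) in items.items()}
    best = max(scores.values())
    return {i for i, s in scores.items() if s >= threshold * best}
```

Note that the prior factor lets a rarely-built but promising category survive against high-inside-probability competitors, which is the point of scoring by the product rather than by inside probability alone.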
We used all training set subtrees of depth 1, but due to memory limitations we used a subset of the subtrees larger than depth 1, by taking for each depth a random sample of 400,000 subtrees. These random subtree samples were not selected by first exhaustively computing the complete set of subtrees (this was computationally prohibitive). Instead, for each particular depth > 1 we sampled subtrees by randomly selecting a node in a random tree from the training set, after which we selected random expansions from that node until a subtree of the particular depth was obtained. We repeated this procedure 400,000 times for each depth > 1 and ≤ 14. Thus no subtrees of depth > 14 were used. This resulted in a base line subtree set of 5,217,529 subtrees which were smoothed by the technique described in Bod (1996) based on Good-Turing. Since our subtrees are allowed to be lexicalized (at their frontiers), we did not use a separate part-of-speech tagger: the test sentences were directly parsed by the training set subtrees. For words that were unknown in our subtree set, we guessed their categories by means of the method described in Weischedel et al. (1993) which uses statistics on word-endings, hyphenation and capitalization. The guessed category for each unknown word was converted into a depth-1 subtree and assigned a probability by means of simple Good-Turing estimation (see Bod 1998). The most probable parse for each test sentence was estimated from the 1,000 most probable derivations of that sentence, as described in section 3. We used "evalb"1 to compute the standard PARSEVAL scores for our parse results. We focus on the Labeled Precision (LP) and Labeled Recall (LR) scores only in this paper, as these are commonly used to rank parsing systems.
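The labeled precision and recall just mentioned can be computed with a few lines of code (a simplified stand-in for evalb, our own sketch; here a labeled constituent is a `(label, start, end)` triple and the inputs are sets):

```python
def parseval_scores(proposed, gold):
    """PARSEVAL-style labeled precision and recall over constituents.
    `proposed` and `gold` are sets of (label, start, end) triples for the
    proposed parse P and the treebank parse T respectively.
    LP = correct / |P|, LR = correct / |T|."""
    correct = len(proposed & gold)
    precision = correct / len(proposed)
    recall = correct / len(gold)
    return precision, recall
```

A full evalb reimplementation additionally handles punctuation, label equivalences and length cutoffs, but the LP/LR core is just this set intersection.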
Table 1 shows the LP and LR scores obtained with our base line subtree set, and compares these scores with those of previous stochastic parsers tested on the WSJ (respectively Charniak 1997, Collins 1999, Ratnaparkhi 1999, and Charniak 2000). The table shows that by using the base line subtree set, our parser outperforms most previous parsers but it performs worse than the parser in Charniak (2000). We will use our scores of 89.5% LP and 89.3% LR (for test sentences ≤ 40 words) as the base line result against which the effect of various subtree restrictions is investigated. While most subtree restrictions diminish the accuracy scores, we will see that there are restrictions that improve our scores, even beyond those of Charniak (2000). We will initially study our subtree restrictions only for test sentences ≤ 40 words (2245 sentences), after which we will give in 4.6 our results for all test sentences ≤ 100 words (2416 sentences). While we have tested all subtree restrictions initially on the development set (section 22 in the WSJ), we believe that it is interesting and instructive to report these subtree restrictions on the test set (section 23) rather than reporting our best result only.

[1] http://www.cs.nyu.edu/cs/projects/proteus/evalb/

Parser    LP    LR
≤ 40 words
Char97    87.4  87.5
Coll99    88.7  88.5
Char00    90.1  90.1
Bod00     89.5  89.3
≤ 100 words
Char97    86.6  86.7
Coll99    88.3  88.1
Ratna99   87.5  86.3
Char00    89.5  89.6
Bod00     88.6  88.3

Table 1. Parsing results with the base line subtree set compared to previous parsers

4.2 The impact of subtree size

Our first subtree restriction is concerned with subtree size. We therefore performed experiments with versions of DOP1 where the base line subtree set is restricted to subtrees with a certain maximum depth. Table 2 shows the results of these experiments.

depth of subtrees   LP    LR
1                   76.0  71.8
≤2                  80.1  76.5
≤3                  82.8  80.9
≤4                  84.7  84.1
≤5                  85.5  84.9
≤6                  86.2  86.0
≤8                  87.9  87.1
≤10                 88.6  88.0
≤12                 89.1  88.8
≤14                 89.5  89.3

Table 2.
Parsing results for different subtree depths (for test sentences ≤ 40 words)

Our scores for subtree-depth 1 are comparable to Charniak's treebank grammar if tested on word strings (see Charniak 1997). Our scores are slightly better, which may be due to the use of a different unknown word model. Note that the scores consistently improve if larger subtrees are taken into account. The highest scores are obtained if the full base line subtree set is used, but they remain behind the results of Charniak (2000). One might expect that our results further increase if even larger subtrees are used; but due to memory limitations we did not perform experiments with subtrees larger than depth 14.

4.3 The impact of lexical context

The more words a subtree contains in its frontier, the more lexical dependencies can be taken into account. To test the impact of the lexical context on the accuracy, we performed experiments with different versions of the model where the base line subtree set is restricted to subtrees whose frontiers contain a certain maximum number of words; the subtree depth in the base line subtree set was not constrained (though no subtrees deeper than 14 were in this base line set). Table 3 shows the results of our experiments.

# words in subtrees   LP    LR
≤1                    84.4  84.0
≤2                    85.2  84.9
≤3                    86.6  86.3
≤4                    87.6  87.4
≤6                    88.0  87.9
≤8                    89.2  89.1
≤10                   90.2  90.1
≤11                   90.8  90.4
≤12                   90.8  90.5
≤13                   90.4  90.3
≤14                   90.3  90.3
≤16                   89.9  89.8
unrestricted          89.5  89.3

Table 3. Parsing results for different subtree lexicalizations (for test sentences ≤ 40 words)

We see that the accuracy initially increases when the lexical context is enlarged, but that the accuracy decreases if the number of words in the subtree frontiers exceeds 12 words. Our highest scores of 90.8% LP and 90.5% LR outperform the scores of the best previously published parser by Charniak (2000) who obtains 90.1% for both LP and LR.
Moreover, our scores also outperform the reranking technique of Collins (2000) who reranks the output of the parser of Collins (1999) using a boosting method based on Schapire & Singer (1998), obtaining 90.4% LP and 90.1% LR. We have thus found a subtree restriction which does not decrease the parse accuracy but even improves it. This restriction consists of an upper bound of 12 words in the subtree frontiers, for subtrees ≤ depth 14. (We have also tested this lexical restriction in combination with subtrees smaller than depth 14, but this led to a decrease in accuracy.)

4.4 The impact of structural context

Instead of investigating the impact of lexical context, we may also be interested in studying the importance of structural context. We may raise the question as to whether we need all unlexicalized subtrees, since such subtrees do not contain any lexical information, although they may be useful to smooth lexicalized subtrees. We accomplished a set of experiments where unlexicalized subtrees of a certain minimal depth are deleted from the base line subtree set, while all lexicalized subtrees up to 12 words are retained.

depth of deleted unlexicalized subtrees   LP    LR
≥1                                        79.9  77.7
≥2                                        86.4  86.1
≥3                                        89.9  89.5
≥4                                        90.6  90.2
≥5                                        90.7  90.6
≥6                                        90.8  90.6
≥7                                        90.8  90.5
≥8                                        90.8  90.5
≥10                                       90.8  90.5
≥12                                       90.8  90.5

Table 4. Parsing results for different structural context (for test sentences ≤ 40 words)

Table 4 shows that the accuracy increases if unlexicalized subtrees are retained, but that unlexicalized subtrees larger than depth 6 do not contribute to any further increase in accuracy. On the contrary, these larger subtrees even slightly decrease the accuracy. The highest scores obtained are: 90.8% labeled precision and 90.6% labeled recall. We thus conclude that pure structural context without any lexical information contributes to higher parse accuracy (even if there exists an upper bound for the size of structural context).
The importance of structural context is consonant with Johnson (1998), who showed that structural context from higher nodes in the tree (i.e. grandparent nodes) contributes to higher parse accuracy. This corresponds to our finding that unlexicalized subtrees of depth 2 are important. But our results show that larger structural context (up to depth 6) also contributes to the accuracy.

4.5 The impact of nonheadword dependencies

We may also ask whether we really need almost arbitrarily large lexicalized subtrees (up to 12 words) to obtain our best results. It could be that DOP's gain in parse accuracy with increasing subtree depth is due to the model becoming sensitive to the influence of lexical heads higher in the tree, and that this gain could also be achieved by a more compact model which associates each nonterminal with its headword, such as a head-lexicalized SCFG. Head-lexicalized stochastic grammars have recently become increasingly popular (see Collins 1997, 1999; Charniak 1997, 2000). These grammars are based on Magerman's head-percolation scheme to determine the headword of each nonterminal (Magerman 1995). Unfortunately this means that head-lexicalized stochastic grammars cannot capture dependency relations between words that according to Magerman's head-percolation scheme are "nonheadwords" -- e.g. between more and than in the WSJ construction carry more people than cargo, where neither more nor than is a headword of the NP constituent more people than cargo. A frontier-lexicalized DOP model, on the other hand, captures these dependencies, since it includes subtrees in which more and than are the only frontier words. One may object that this example is somewhat far-fetched, but Chiang (2000) notes that head-lexicalized stochastic grammars fall short in encoding even simple dependency relations, such as between left and John in the sentence John should have left.
This is because Magerman's head-percolation scheme makes should and have the heads of their respective VPs, so that there is no dependency relation between the verb left and its subject John. Chiang observes that almost a quarter of all nonempty subjects in the WSJ appear in such a configuration. In order to isolate the contribution of nonheadword dependencies to the parse accuracy, we eliminated all subtrees containing more than a certain maximum number of nonheadwords, where a nonheadword of a subtree is a word which according to Magerman's scheme is not a headword of the subtree's root nonterminal (although such a nonheadword may of course be a headword of one of the subtree's internal nodes). In the following experiments we used the subtree set for which maximum accuracy was obtained in our previous experiments, i.e. containing all lexicalized subtrees with maximally 12 frontier words and all unlexicalized subtrees up to depth 6.

# nonheadwords in subtrees   LP    LR
0                            89.6  89.6
≤1                           90.2  90.1
≤2                           90.4  90.2
≤3                           90.3  90.2
≤4                           90.6  90.4
≤5                           90.6  90.6
≤6                           90.6  90.5
≤7                           90.7  90.7
≤8                           90.8  90.6
unrestricted                 90.8  90.6

Table 5. Parsing results for different numbers of nonheadwords (for test sentences ≤ 40 words)

Table 5 shows that nonheadwords contribute to higher parse accuracy: the difference between using no and all nonheadwords is 1.2% in LP and 1.0% in LR. Although this difference is relatively small, it does indicate that nonheadword dependencies should preferably not be discarded in the WSJ. We should note, however, that most other stochastic parsers do include counts of single nonheadwords: they appear in the backed-off statistics of these parsers (see Collins 1997, 1999; Charniak 1997; Goodman 1998). But to the best of our knowledge, ours is the first parser that also includes counts between two or more nonheadwords, and these counts lead to improved performance, as can be seen in Table 5.
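Magerman-style head percolation can be sketched as a table mapping each nonterminal to the child category that supplies its head; frontier words that do not percolate up to the root are the "nonheadwords" counted above. A toy sketch; the percolation table below is purely illustrative, not Magerman's actual rule set:

```python
# toy table: parent label -> label of the head child (illustrative only)
HEAD_CHILD = {"S": "VP", "VP": "V", "NP": "N"}

def headword(tree):
    """tree = (label, [children]); a preterminal has one string child."""
    label, children = tree
    if len(children) == 1 and isinstance(children[0], str):
        return children[0]                   # preterminal: its word is the head
    want = HEAD_CHILD.get(label)
    for child in children:
        if child[0] == want:
            return headword(child)
    return headword(children[0])             # fallback: leftmost child

def frontier(tree):
    label, children = tree
    if len(children) == 1 and isinstance(children[0], str):
        return [children[0]]
    return [w for c in children for w in frontier(c)]

def nonheadwords(tree):
    """Frontier words that are not the headword of the tree's root."""
    words = frontier(tree)
    words.remove(headword(tree))
    return words
```

For John should have left, a table routing S and VP heads through the auxiliaries would leave left as a nonheadword of S, which is exactly the configuration Chiang points out.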
4.6 Results for all sentences

We have seen that for test sentences ≤ 40 words, maximal parse accuracy was obtained by a subtree set which is restricted to subtrees with not more than 12 frontier words and which does not contain unlexicalized subtrees deeper than 6.² We used these restrictions to test our model on all sentences ≤ 100 words from the WSJ test set. This resulted in an LP of 89.7% and an LR of 89.7%. These scores slightly outperform the best previously published parser by Charniak (2000), who obtained 89.5% LP and 89.6% LR for test sentences ≤ 100 words. Only the reranking technique proposed by Collins (2000) slightly outperforms our precision score, but not our recall score: 89.9% LP and 89.6% LR.

² It may be noteworthy that for the development set (section 22 of the WSJ), maximal parse accuracy was obtained with exactly the same subtree restrictions. As explained in 4.1, we initially tested all restrictions on the development set, but we preferred to report the effects of these restrictions for the test set.

5 Discussion: Converging Approaches

The main goal of this paper was to find the minimal set of fragments which achieves maximal parse accuracy in Data Oriented Parsing. We have found that this minimal set of fragments is very large and extremely redundant: highest parse accuracy is obtained by employing only two constraints on the fragment set: a restriction of the number of words in the fragment frontiers to 12, and a restriction of the depth of unlexicalized fragments to 6. No other constraints were warranted. There remains the important question why maximal parse accuracy occurs with exactly these constraints. Although we do not know the answer, we surmise that these constraints differ from corpus to corpus and are related to general data-sparseness effects. In previous experiments with DOP1 on smaller and more restricted domains, we found that the parse accuracy also decreases after a certain maximum subtree depth (see Bod 1998; Sima'an 1999).
We expect that for the WSJ, too, the parse accuracy will decrease after a certain depth, although we have not been able to find this depth so far. A major difference between our approach and most other models tested on the WSJ is that the DOP model uses frontier lexicalization, while most other models use constituent lexicalization (in that they associate each constituent nonterminal with its lexical head -- see Collins 1996, 1999; Charniak 1997; Eisner 1997). The results in this paper indicate that frontier lexicalization is a promising alternative to constituent lexicalization. Our results also show that the linguistically motivated constraint which limits statistical dependencies to the locality of headwords of constituents is too narrow. Not only are counts of subtrees with nonheadwords important; counts of unlexicalized subtrees up to depth 6 also increase the parse accuracy. The only other model that uses frontier lexicalization and that was tested on the standard WSJ split is Chiang (2000), who extracts a stochastic tree-insertion grammar or STIG (Schabes & Waters 1996) from the WSJ, obtaining 86.6% LP and 86.9% LR for sentences ≤ 40 words. However, Chiang's approach is limited in at least two respects. First, each elementary tree in his STIG is lexicalized with exactly one lexical item, while our results show that parse accuracy increases if more lexical items and also unlexicalized trees are included (in his conclusion, Chiang acknowledges that "multiply anchored trees" may be important). Second, Chiang computes the probability of a tree by taking into account only one derivation, while in a STIG, as in DOP1, there can be several derivations that generate the same tree. Another difference between our approach and most other models is that the underlying grammar of DOP is a treebank grammar (cf. Charniak 1996, 1997), while most current stochastic parsing models use a "markov grammar" (e.g. Collins 1999; Charniak 2000).
While a treebank grammar only assigns probabilities to rules or subtrees that are seen in a treebank, a markov grammar assigns probabilities to any possible rule, resulting in a more robust model. We expect that the application of the markov-grammar approach to DOP will further improve our results. Research in this direction is already ongoing, though it has been tested for rather limited subtree depths only (see Sima'an 2000). Although we believe that our main result is to have shown that almost arbitrary fragments within parse trees are important, it is surprising that a relatively simple model like DOP1 outperforms most other stochastic parsers on the WSJ. Yet, to the best of our knowledge, DOP is the only model which does not a priori restrict the fragments that are used to compute the most probable parse. Instead, it starts out by taking into account all fragments seen in a treebank and then investigates fragment restrictions to discover the set of relevant fragments. From this perspective, the DOP approach can be seen as striving for the same goal as other approaches, but from a different direction. While other approaches usually limit the statistical dependencies beforehand (for example, to headword dependencies) and then try to improve parse accuracy by gradually letting in more dependencies, the DOP approach starts out by taking into account as many dependencies as possible and then tries to constrain them without losing parse accuracy. It is not unlikely that these two opposite directions will finally converge to the same, true set of statistical dependencies for natural language parsing. As it happens, considerable convergence has already taken place: the history of stochastic parsing models shows a consistent increase in the scope of statistical dependencies that these models capture.
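Under DOP1, the treebank-grammar side of this contrast, a fragment's probability is its relative frequency among fragments with the same root nonterminal, and unseen fragments get probability zero. A minimal sketch of just the estimator; the representation of fragments as (root label, fragment id) occurrences is our own simplification:

```python
from collections import Counter

def dop1_probabilities(occurrences):
    """occurrences: list of (root_label, fragment_id) fragment occurrences
    extracted from a treebank. Returns the DOP1-style relative-frequency
    estimate P(fragment) = count(fragment) / count(same-root fragments)."""
    frag_counts = Counter(occurrences)
    root_counts = Counter(root for root, _ in occurrences)
    return {frag: n / root_counts[frag[0]] for frag, n in frag_counts.items()}

# toy treebank: three S-rooted fragment occurrences, one NP-rooted
probs = dop1_probabilities([("S", "s1"), ("S", "s1"), ("S", "s2"), ("NP", "n1")])
```

A markov grammar would instead decompose each rule event into smaller conditioned steps, so even an unseen fragment receives nonzero probability; that smoothing is exactly what the text expects to help DOP.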
Figure 4 gives a (very) schematic overview of this increase (see Carroll & Weir 2000 for a more detailed account of a subsumption lattice where SCFG is at the bottom and DOP is at the top).

Model -- Scope of statistical dependencies:
Charniak (1996): context-free rules
Collins (1996), Eisner (1996): context-free rules, headwords
Charniak (1997): context-free rules, headwords, grandparent nodes
Collins (2000): context-free rules, headwords, grandparent nodes/rules, bigrams, two-level rules, two-level bigrams, nonheadwords
Bod (1992): all fragments within parse trees

Figure 4. Schematic overview of the increase of statistical dependencies covered by stochastic parsers

Thus there seems to be a convergence towards a maximalist model which "takes all fragments [...] and lets the statistics decide" (Bod 1998: 5). While early head-lexicalized grammars restricted the fragments to the locality of headwords (e.g. Collins 1996; Eisner 1996), later models showed the importance of including context from higher nodes in the tree (Charniak 1997; Johnson 1998). This mirrors our result on the utility of (unlexicalized) fragments of depth 2 and larger. The importance of including single nonheadwords is now also uncontroversial (e.g. Collins 1997, 1999; Charniak 2000), and the current paper has shown the importance of including two and more nonheadwords. Recently, Collins (2000) observed that "In an ideal situation we would be able to encode arbitrary features hs, thereby keeping track of counts of arbitrary fragments within parse trees". This is in perfect correspondence with the DOP philosophy.

References

R. Bod, 1992. Data Oriented Parsing, Proceedings COLING'92, Nantes, France.
R. Bod, 1993. Using an Annotated Language Corpus as a Virtual Stochastic Grammar, Proceedings AAAI'93, Washington D.C.
R. Bod, 1996. Two Questions about Data-Oriented Parsing, Proceedings 4th Workshop on Very Large Corpora, COLING'96, Copenhagen, Denmark.
R. Bod, 1998.
Beyond Grammar: An Experience-Based Theory of Language, Stanford, CSLI Publications, distributed by Cambridge University Press.
R. Bod, 2000a. Parsing with the Shortest Derivation, Proceedings COLING'2000, Saarbrücken, Germany.
R. Bod, 2000b. Combining Semantic and Syntactic Structure for Language Modeling, Proceedings ICSLP-2000, Beijing, China.
R. Bod, 2000c. An Improved Parser for Data-Oriented Lexical-Functional Analysis, Proceedings ACL-2000, Hong Kong, China.
R. Bod, 2001. Using Natural Language Processing Techniques for Musical Parsing, Proceedings ACH/ALLC'2001, New York, NY.
R. Bod and R. Kaplan, 1998. A Probabilistic Corpus-Driven Model for Lexical-Functional Analysis, Proceedings COLING-ACL'98, Montreal, Canada.
R. Bonnema, P. Buying and R. Scha, 1999. A New Probability Model for Data-Oriented Parsing, Proceedings of the Amsterdam Colloquium'99, Amsterdam, Holland.
J. Carroll and D. Weir, 2000. Encoding Frequency Information in Lexicalized Grammars, in H. Bunt and A. Nijholt (eds.), Advances in Probabilistic and Other Parsing Technologies, Kluwer Academic Publishers.
E. Charniak, 1996. Tree-bank Grammars, Proceedings AAAI'96, Menlo Park, Ca.
E. Charniak, 1997. Statistical Parsing with a Context-Free Grammar and Word Statistics, Proceedings AAAI-97, Menlo Park, Ca.
E. Charniak, 2000. A Maximum-Entropy-Inspired Parser, Proceedings ANLP-NAACL'2000, Seattle, Washington.
D. Chiang, 2000. Statistical Parsing with an Automatically Extracted Tree Adjoining Grammar, Proceedings ACL'2000, Hong Kong, China.
M. Collins, 1996. A New Statistical Parser Based on Bigram Lexical Dependencies, Proceedings ACL'96, Santa Cruz, Ca.
M. Collins, 1997. Three Generative Lexicalised Models for Statistical Parsing, Proceedings ACL'97, Madrid, Spain.
M. Collins, 1999. Head-Driven Statistical Models for Natural Language Parsing, PhD thesis, University of Pennsylvania, PA.
M. Collins, 2000. Discriminative Reranking for Natural Language Parsing, Proceedings ICML-2000, Stanford, Ca.
J.
Eisner, 1996. Three New Probabilistic Models for Dependency Parsing: An Exploration, Proceedings COLING-96, Copenhagen, Denmark.
J. Eisner, 1997. Bilexical Grammars and a Cubic-Time Probabilistic Parser, Proceedings Fifth International Workshop on Parsing Technologies, Boston, Mass.
J. Goodman, 1996. Efficient Algorithms for Parsing the DOP Model, Proceedings Empirical Methods in Natural Language Processing, Philadelphia, PA.
J. Goodman, 1997. Global Thresholding and Multiple-Pass Parsing, Proceedings EMNLP-2, Boston, Mass.
J. Goodman, 1998. Parsing Inside-Out, PhD thesis, Harvard University, Mass.
M. Johnson, 1998. PCFG Models of Linguistic Tree Representations, Computational Linguistics 24(4), 613-632.
D. Magerman, 1995. Statistical Decision-Tree Models for Parsing, Proceedings ACL'95, Cambridge, Mass.
M. Marcus, B. Santorini and M. Marcinkiewicz, 1993. Building a Large Annotated Corpus of English: The Penn Treebank, Computational Linguistics 19(2).
A. Ratnaparkhi, 1999. Learning to Parse Natural Language with Maximum Entropy Models, Machine Learning 34, 151-176.
Y. Schabes and R. Waters, 1996. Stochastic Lexicalized Tree-Insertion Grammar, in H. Bunt and M. Tomita (eds.), Recent Advances in Parsing Technology, Kluwer Academic Publishers.
R. Schapire and Y. Singer, 1998. Improved Boosting Algorithms Using Confidence-Rated Predictions, Proceedings 11th Annual Conference on Computational Learning Theory, Morgan Kaufmann, San Francisco.
K. Sima'an, 1999. Learning Efficient Disambiguation, PhD thesis, University of Amsterdam, The Netherlands.
K. Sima'an, 2000. Tree-gram Parsing: Lexical Dependencies and Structural Relations, Proceedings ACL'2000, Hong Kong, China.
R. Weischedel, M. Meteer, R. Schwartz, L. Ramshaw and J. Palmucci, 1993. Coping with Ambiguity and Unknown Words through Probabilistic Models, Computational Linguistics 19(2).
Underspecified Beta Reduction

Manuel Bodirsky, Katrin Erk, Joachim Niehren
Programming Systems Lab, Saarland University, D-66041 Saarbrücken, Germany
{bodirsky|erk|niehren}@ps.uni-sb.de

Alexander Koller
Department of Computational Linguistics, Saarland University, D-66041 Saarbrücken, Germany
[email protected]

Abstract

For ambiguous sentences, traditional semantics construction produces large numbers of higher-order formulas, which must then be β-reduced individually. Underspecified versions can produce compact descriptions of all readings, but it is not known how to perform β-reduction on these descriptions. We show how to do this using β-reduction constraints in the constraint language for λ-structures (CLLS).

1 Introduction

Traditional approaches to semantics construction (Montague, 1974; Cooper, 1983) employ formulas of higher-order logic to derive semantic representations compositionally; then β-reduction is applied to simplify these representations. When the input sentence is ambiguous, these approaches require all readings to be enumerated and reduced individually. For large numbers of readings, this is both inefficient and inelegant. Existing underspecification approaches (Reyle, 1993; van Deemter and Peters, 1996; Pinkal, 1996; Bos, 1996) provide a partial solution to this problem. They delay the enumeration of the readings and represent them all at once in a single, compact description. An underspecification formalism that is particularly well suited for describing higher-order formulas is the Constraint Language for Lambda Structures, CLLS (Egg et al., 2001; Erk et al., 2001). CLLS descriptions can be derived compositionally and have been used to deal with a rich class of linguistic phenomena (Koller et al., 2000; Koller and Niehren, 2000). They are based on dominance constraints (Marcus et al., 1983; Rambow et al., 1995) and extend them with parallelism (Erk and Niehren, 2000) and binding constraints.
However, lifting β-reduction to an operation on underspecified descriptions is not trivial, and to our knowledge it is not known how this can be done. Such an operation -- which we will call underspecified β-reduction -- would essentially reduce all described formulas at once by deriving a description of the reduced formulas. In this paper, we show how underspecified β-reductions can be performed in the framework of CLLS. Our approach extends the work presented in (Bodirsky et al., 2001), which defines β-reduction constraints and shows how to obtain a complete solution procedure by reducing them to parallelism constraints in CLLS. The problem with this previous work is that it is often necessary to perform local disambiguations. Here we add a new mechanism which, for a large class of descriptions, permits us to perform underspecified β-reduction steps without disambiguating, and is still complete for the general problem.

Plan. We start with a few examples to show what underspecified β-reduction should do, and why it is not trivial. We then introduce CLLS and β-reduction constraints. In the core of the paper we present a procedure for underspecified β-reduction and apply it to illustrative examples.

2 Examples

In this section, we show what underspecified β-reduction should do, and why the task is nontrivial. Consider first the ambiguous sentence Every student didn't pay attention. In first-order logic, the two readings can be represented as

∀x (stud(x) → ¬pay_att(x))
¬∀x (stud(x) → pay_att(x))

Figure 1: Underspecified β-reduction steps for 'Every student did not pay attention'

Figure 2: Description of 'Every student did not pay attention'

A classical compositional semantics construction first derives these two readings in the form of two HOL formulas:

(every' stud')(λx. ¬(pay_att' x))
¬((every' stud')(λx. pay_att' x))

where every' is an abbreviation for the term
λP. λQ. ∀x (P(x) → Q(x))

An underspecified description of both readings is shown in Figure 2. For now, notice that the graph has all the symbols of the two HOL formulas as node labels, that variable binding is indicated by dashed arrows, and that there are dotted lines indicating an "outscopes" relation; we will fill in the details in Section 3. Now we want to reduce the description in Figure 2 as far as possible. The first β-reduction step, with the redex at X₀, is straightforward. Even though the description is underspecified, the reducing part is a completely known λ-term. The result is shown on the left-hand side of Figure 1. Here we have just one redex, starting at Y₁, which binds a single variable. The next reduction step is less obvious: the ¬ operator could either belong to the context (the part between Y₀ and Y₁) or to the argument (below Y₄). Still, it is not difficult to give a correct description of the result: it is shown in the middle of Fig. 1. For the final step, which takes us to the rightmost description, the redex starts at Z₁. Note that now the ¬ might be part of the body or part of the context of this redex. The end result is precisely a description of the two readings as first-order formulas. So far, the problem does not look too difficult. Twice, we did not know exactly what the parts of the redex were, but it was still easy to derive correct descriptions of the reducts. But this is not always the case.

Figure 3: Problems with rewriting of descriptions

Consider Figure 3, an abstract but simple example. In the left description, there are two possible positions for the ¬: above the redex or inside its argument. Proceeding naïvely as above, we arrive at the right-hand description in Fig. 3. But this description is also satisfied by the term f(¬(b(a))), which cannot be obtained by reducing any of the terms described on the left-hand side.
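On fully specified λ-terms, each step above is ordinary β-reduction with capture-avoiding substitution; the difficulty discussed in this section only arises once the terms are underspecified. A compact sketch over a toy tuple encoding of terms (the encoding and function names are ours, not CLLS):

```python
import itertools

fresh = (f"v{i}" for i in itertools.count())   # fresh-name supply for renaming

def subst(term, x, arg):
    """Capture-avoiding substitution term[arg/x].
    Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg)."""
    kind = term[0]
    if kind == "var":
        return arg if term[1] == x else term
    if kind == "app":
        return ("app", subst(term[1], x, arg), subst(term[2], x, arg))
    _, y, body = term                          # 'lam' case
    if y == x:
        return term                            # x is shadowed, stop here
    z = next(fresh)                            # rename binder to avoid capture
    return ("lam", z, subst(subst(body, y, ("var", z)), x, arg))

def beta(term):
    """Reduce one outermost redex (λy.B) A, if the term is one."""
    if term[0] == "app" and term[1][0] == "lam":
        _, (_, y, body), arg = term
        return subst(body, y, arg)
    return term
```

The unconditional renaming in the `lam` case is wasteful but keeps the capture-avoidance argument trivially correct.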
More generally, the naïve "graph rewriting" approach is unsound: the resulting descriptions can have too many readings. Similar problems arise in (more complicated) examples from semantics, such as the coordination in Fig. 8. The underspecified β-reduction operation we propose here does not rewrite descriptions. Instead, we describe the result of the step using a "β-reduction constraint" that ensures that the reduced terms are captured correctly. Then we use a saturation calculus to make the description more explicit.

3 Tree descriptions in CLLS

In this section, we briefly recall the definition of the constraint language for λ-structures (CLLS). A more thorough and complete introduction can be found in (Egg et al., 2001). We assume a signature Σ = {f, g, ...} of function symbols, each equipped with an arity ar(f) ≥ 0. A tree τ consists of a finite set of nodes u ∈ D_τ, each of which is labeled by a symbol σ_τ(u) ∈ Σ. Each node u has a sequence of children u1, ..., un ∈ D_τ, where n = ar(σ_τ(u)) is the arity of the label of u. A single node ε, the root of τ, is not the child of any other node.

3.1 Lambda structures

The idea behind λ-structures is that a λ-term can be considered as a pair of a tree which represents the structure of the term and a binding function encoding variable binding. We assume Σ contains symbols var (arity 0, for variables), lam (arity 1, for abstraction), @ (arity 2, for application), and analogous labels for the logical connectives.

Definition 1. A λ-structure L is a pair (τ, λ) of a tree τ and a binding function λ that maps every node u with label var to a node with label lam, ∃, or ∀ dominating u.
A  -structure corresponds uniquely to a closed  -term modulo l -renaming. We will freely consider  -structures as first-order model structures with domain []\ . This structure defines the following relations. The labeling relation Xnm E 6 X  MPOPOPO M X#o 8 holds in W if ^_\ 6 X 8p? E and XBq ? Xr for all atsur]sub . The dominance relation X9vxw<Xy holds iff there is a path Xy y such that X9X y y ? X y . Inequality z ? is simply inequality of nodes; disjointness X {2Xy holds iff neither X9vxw<Xy nor X y v w X . 3.2 Basic constraints Now we define the constraint language for  structures (CLLS) to talk about these relations. 3 M  M ) are variables that will denote nodes of a  -structure. | m1m ? 3 v w ~}=3 z ? ~}=3 { ~} |tt| y } 3 m E 6 3€ MPOPOPO M 3 o 8 6J 6 E 8+? b 8 }  6 3 8 ? } 9‚  6 34 8 ?@L 3€ MPOPOPORM 3 o Q A constraint | is a conjunction of literals (for dominance, labeling, etc). We use the abbreviations 3 vxƒ  for 3 vxw   3 z ?  and 3 ?  for 3 v w    v w 3 . The  -binding literal  6 3 8 ?  expresses that  denotes a node which the binding function maps to 3 . The inverse  -binding literal 9‚  6 34 8 ?@L 3€ MPOPOPO M 3 o Q states that 3  MPOPOPO M 3 o denote the entire set of variable nodes bound by 34 . A pair 6 h M…„ 8 of a  structure h and a variable assignment „ satisfies a  -structure iff it satisfies each literal, in the obvious way.    -   3 3  3 ! Figure 4: The constraint graph of  ‚  6 3 8 ?CL 3€ M 34! Q  3 vxw 3€  3 vxw 34! We draw constraints as graphs (Fig. 4) in which nodes represent variables. Labels and solid lines indicate labeling literals, while dotted lines represent dominance. Dashed arrows indicate the binding relation; disjointness and inequality literals are not represented. The informal diagrams from Section 2 can thus be read as constraint graphs, which gives them a precise formal meaning. 3.3 Segments and Correspondences Finally, we define segments of  -structures and correspondences between segments. 
This allows us to define parallelism and β-reduction constraints. A segment is a contiguous part of a λ-structure that is delineated by several nodes of the structure. Intuitively, it is a tree from which some subtrees have been cut out, leaving behind holes.

Definition 2 (Segments). A segment α of a λ-structure (τ, λ) is a tuple u₀/u₁, ..., uₙ of nodes in D_τ such that u₀ ◁* u_i and u_i ⊥ u_j hold in τ for 1 ≤ i ≠ j ≤ n. The root r(α) is u₀, and hs(α) = u₁, ..., uₙ is its (possibly empty) sequence of holes. The set b(α) of nodes of α is

b(α) = {u ∈ D_τ | r(α) ◁* u, and not u_i ◁⁺ u for all 1 ≤ i ≤ n}

To exempt the holes of the segment, we define b⁻(α) = b(α) − hs(α). If hs(α) is a singleton sequence, then we write h(α) for the unique hole of α, i.e. the unique node with h(α) ∈ hs(α). For instance, α = u₁/u₃, u₅ is a segment in Fig. 5; its root is u₁, its holes are u₃ and u₅, and it contains the nodes b(α) = {u₁, u₂, u₃, u₅}. Two tree segments α, β overlap properly iff b⁻(α) ∩ b⁻(β) ≠ ∅. The syntactic equivalent of a segment is a segment term X₀/X₁, ..., Xₙ. We use the letters A, B, C, D for them and extend r(A), hs(A), and b(A) correspondingly. A correspondence function is intuitively an isomorphism between segments, mapping holes to holes and roots to roots and respecting the structure of the trees:

Definition 3. A correspondence function between the segments α, β is a bijective mapping c: b(α) → b(β) such that c maps the i-th hole of α to the i-th hole of β for each i, and for every u ∈ b⁻(α) and every label f,

u:f(u₁, ..., uₙ) ⟺ c(u):f(c(u₁), ..., c(uₙ))

There is at most one correspondence function between any two given segments. The correspondence literal co(C, D)(X) = Y
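The node set b(α) of Definition 2 (everything below the root that is not strictly below a hole) can be computed directly on a path encoding of nodes. A small sketch; the encoding of nodes as child-index paths and all function names are our own:

```python
def dominates(u, v):
    """u ◁* v: u is a prefix of v (nodes are child-index tuples)."""
    return v[:len(u)] == u

def strictly_dominates(u, v):
    """u ◁+ v: dominance plus inequality."""
    return dominates(u, v) and u != v

def segment_nodes(all_nodes, root, holes):
    """b(alpha): nodes below root, excluding proper descendants of holes;
    the holes themselves stay in, matching Definition 2."""
    return {u for u in all_nodes
            if dominates(root, u)
            and not any(strictly_dominates(h, u) for h in holes)}

# tree f(g(a), b): () is f, (0,) is g, (0, 0) is a, (1,) is b
nodes = {(), (0,), (0, 0), (1,)}
```

b⁻(α) is then just the result minus the hole set, mirroring the definition above.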
expresses that a correspondence function c between the segments denoted by C and D exists, that X and Y denote nodes within these segments, and that these nodes are related by c. Together, these constructs allow us to define parallelism, which was originally introduced for the analysis of ellipsis (Egg et al., 2001). The parallelism relation α ∼ β holds iff there is a correspondence function between α and β that satisfies some natural conditions on λ-binding which we cannot go into here. To model parallelism in the presence of global λ-binders relating multiple parallel segments, Bodirsky et al. (2001) generalize parallelism to group parallelism. Group parallelism (α₁, ..., αₙ) ∼ (β₁, ..., βₙ) is entailed by the conjunction ∧ᵢ₌₁ⁿ αᵢ ∼ βᵢ of ordinary parallelisms, but imposes slightly weaker restrictions on λ-binding.

Figure 5: f((λx.g(x))(a)) →β f(g(a))

By way of example, consider the λ-structure in Fig. 5, where (u₀/u₁, u₃/u₄, u₅/) ∼ (u₀′/u₁′, u₁′/u₂′, u₂′/) holds. On the syntactic side, CLLS provides group parallelism literals (A₁, ..., Aₙ) ∼ (B₁, ..., Bₙ) to talk about (group) parallelism.

4 Beta reduction constraints

Correspondences are also used in the definition of β-reduction constraints (Bodirsky et al., 2001). A β-reduction constraint describes a single reduction step between two λ-terms; it enforces correct reduction even if the two terms are only partially known. Standard β-reduction has the form

C((λx.B) A) →β C(B[A/x])   (A free for x in B)

The reducing λ-term consists of a context C which contains a redex (λx.B) A. The redex itself is an occurrence of an application of a λ-abstraction λx.B with body B to an argument A. β-reduction then replaces all occurrences of the bound variable x in the body by the argument, while preserving the context. We can partition both redex and reduct into argument, body, and context segments.
Consider Fig. 5. The λ-structure contains the reducing λ-term f((λx.g(x))(a)) starting at u₀. The reduced term can be found at u₀′. Writing C, C′ for the context, B, B′ for the body, and A, A′ for the argument tree segments of the reducing and the reduced term, respectively, we find

C = u₀/u₁    B = u₃/u₄    A = u₅/
C′ = u₀′/u₁′   B′ = u₁′/u₂′   A′ = u₂′/

Because we have both the reducing term and the reduced term as parts of the same λ-structure, we can express the fact that the structure below u₀′ can be obtained by β-reducing the structure below u₀ by requiring that A corresponds to A′, B to B′, and C to C′, again modulo binding. This is indeed true in the given λ-structure, as we have seen above. More generally, we define the β-reduction relation (C, B, A) →β (C′, B′, A′₁, ..., A′ₙ) for a body with n holes (for the variables bound in the redex). The β-reduction relation holds iff two conditions are met: (C, B, A) must form a reducing term, and the structural equalities that we have noted above must hold between the tree segments. The latter can be stated by the following group parallelism relation, which also represents the correct binding behaviour:

(C, B, A, ..., A) ∼ (C′, B′, A′₁, ..., A′ₙ)

Note that any λ-structure satisfying this relation must contain both the reducing and the reduced term as substructures. Incidentally, this allows us to accommodate global variables in λ-terms; Fig. 5 shows this for the global variable g. We now extend CLLS with β-reduction constraints

(C, B, A) →β (C′, B′, A′₁, ..., A′ₙ)

which are interpreted by the β-reduction relation. The reduction steps in Section 2 can all be represented correctly by β-reduction constraints. Consider e.g. the first step in Fig. 1. This is represented by the constraint (Y₀/Y₁, Y₂/Y₃, Y₄/) →β (Z₀/Z₁, Z₁/Z₂, Z₂/). The entire middle constraint in Fig. 1 is entailed by the β-reduction literal.
If we learn in addition a dominance relation between two of the Y variables, the β-reduction literal will entail the corresponding dominance between the matching Z variables, because the segments must correspond. This correlation between parallel segments is the exact same effect (quantifier parallelism) that is exploited in the CLLS analysis of "Hirschbühler sentences", where ellipses and scope interact (Egg et al., 2001). β-reduction constraints also represent the problematic example in Fig. 3 correctly: the spurious solution of the right-hand constraint does not satisfy the β-reduction constraint, as the bodies would not correspond.

usb(φ, X) =
  if all syntactic redexes in φ below X are reduced
  then return (φ, X)
  else
    pick a formula redex_Y(C, B, A) in φ that is unreduced, with X = r(C) in φ
    add (C, B, A) →β (C′, B′, A′₁, ..., A′ₙ) to φ
      where C′, B′, A′₁, ..., A′ₙ are new segment terms with fresh variables
    add X ⊥ r(C′) to φ
    for all φ′ ∈ solve(φ) do usb(φ′, r(C′)) end

Figure 6: Underspecified β-reduction

5 Underspecified Beta Reduction

Having introduced β-reduction constraints, we now show how to process them. In this section, we present the procedure usb, which performs a sequence of underspecified β-reduction steps on CLLS descriptions. This procedure is parameterized by another procedure solve for solving reduction constraints, which we discuss in the following section. A syntactic redex in a constraint φ is a subformula of the following form:

redex_Y(C, B, A) =df h(C):@(Y, r(A)) ∧ Y:lam(r(B)) ∧ λ⁻¹(Y) = hs(B)

A context C of a redex must have a unique hole h(C). An n-ary redex has n occurrences of the bound variable, i.e. the length of hs(B) is n. We call a redex linear if n = 1. The algorithm usb is shown in Figure 6. It starts with a constraint φ and a variable X, which denotes the root of the current λ-term to be reduced. (For example, for the redex in Fig. 2, this root would be X₀.)
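The usb driver of Figure 6 is a plain recursion: while an unreduced redex remains below the current root, post a β-reduction constraint for it, let the solver propagate (possibly branching), and recurse on the reduct's root. The sketch below mirrors only that control flow; the redex search, constraint posting, and solver are abstracted into caller-supplied stubs, and all names are ours:

```python
def usb(phi, root, find_unreduced_redex, post_beta_constraint, solve):
    """Schematic driver after Figure 6. phi may be any constraint
    representation; root is the current term's root variable. Returns all
    fully reduced (constraint, root) pairs reachable via solve's branches."""
    results = []
    def go(phi, root):
        redex = find_unreduced_redex(phi, root)
        if redex is None:                      # everything below root reduced
            results.append((phi, root))
            return
        phi2, new_root = post_beta_constraint(phi, redex)
        for phi3 in solve(phi2):               # one branch per disambiguation
            go(phi3, new_root)
    go(phi, root)
    return results
```

With a purely propagating solve (a single branch per call), the driver performs the whole reduction sequence without ever enumerating readings, which is the point of the procedure.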
The procedure then selects an unreduced syntactic redex and adds a description of its reduct at a disjoint position. Then the solve procedure is applied to resolve the reduction constraint, at least partially. If it has to disambiguate, it returns one constraint for each reading it finds. Finally, usb is called recursively with the new constraint and the root variable of the new  -term. Intuitively, the solve procedure adds entailed literals to | , making the new -reduction literal more explicit. When presented with the left-hand constraint in Fig. 1 and the root variable @ , usb will add a -reduction constraint for the redex at ® ; then solve will derive the middle constraint. Finally, usb will call itself recursively with the new root variable 2! and try to resolve the redex at ), , etc. The partial solving steps do essentially the same as the na¨ıve graph rewriting approach in this case; but the new algorithm will behave differently on problematic constraints as in Fig. 3. 6 A single reduction step In this section we present a procedure solve for solving -reduction constraints. We go through several examples to illustrate how it works. We have to omit some details for lack of space; they can be found in (Bodirsky et al., 2001). The aim of the procedure is to make explicit information that is implicit in -reduction constraints: it introduces new corresponding variables and copies constraints from the reducing term to the reduced term. We build upon the solver for -reduction constraints from (Bodirsky et al., 2001). This solver is complete, i.e. it can enumerate all solutions of a constraint; but it disambiguates a lot, which we want to avoid in underspecified -reduction. We obtain an alternative procedure solve by disabling all rules which disambiguate and adding some new non-disambiguating rules. This allows us to perform a complete underspecified reduction for many examples from underspecified semantics without disambiguating at all. 
In those cases where the new rules alone are not sufficient, we can still fall back on the complete solver. 6.1 Saturation Our constraint solver is based on saturation with a given set of saturation rules. Very briefly, this means that a constraint is seen as the set of its literals, to which more and more literals are added according to saturation rules. A saturation rule of the form |   ½7o q>­  | q says that we can add one of the | q to any constraint that contains at least the literals in |  . We only apply rules where each possible choice adds new literals to the set; a constraint is saturated under a set ¾ of saturation rules if no rule in ¾ can add anything else. solve returns the set of all possible saturations of its input. If the rule system contains nondeterministic distribution rules, with bÀ¿Áa , this set can be non-singleton; but the rules we are going to introduce are all deterministic propagation rules (with b ? a ). 6.2 Solving Beta Reduction Constraints The main problem in doing underspecified reduction is that we may not know to which part of a redex a certain node belongs (as in Fig. 1). We address this problem by introducing underspecified correspondence literals of the form co 6'L6 £  M [  8 MPOPOPO™M 6 £ o M [“o 8 Q 8<6 3 8 ?  O Such a literal is satisfied if the tree segments denoted by the £ ’s and by the [ ’s do not overlap properly, and there is an r for which co 6 £ q M [“q 8<6 3 8e?  is satisfied. In Fig. 7 we present the rules UB for underspecified -reduction; the first five rules are the core of the algorithm. To keep the rules short, we use the following abbreviations (with as•r_s•b ): beta ?2ÂÃÅÄÆ6 £2M…¡¢M   8 « —  6 £ y M…¡ y M  ¸y  MPOPOPO M  ¸y o 8 co q ?2ÂÃÅÄ co 6'L6 £(M¤£ y 8 M 6 ¡¶M…¡ y 8 M 6   M   y q 8 Q 8 The procedure solve consists of UB together with the propagation rules from (Bodirsky et al., 2001). The rest of this section shows how this procedure operates and what it can and cannot do. 
First, we discuss the five core rules. Rule (Beta) states that whenever the -reduction relation holds, group parallelism holds, too. (This allows us to fall back on a complete solver for group parallelism.) Rule (Var) introduces a new variable as a correspondent of a redex variable, and (Lab) and (Dom) copy labeling and dominance literals from the redex to the reduct. To understand the exceptions they make, consider e.g. Fig. 5. Every node below X  has a correspondent in the reduct, except for X  . Every labeling relation in the redex also holds in the reduct, except for the labelings of the  -node X  , the   -node X  , and the   -node X " . For the variables that possess a correspondent, all dominance relations in the redex hold in the reduct too. The rule (  .Inv) copies inverse  binding literals, i.e. the information that all variables bound by a  -binder are known. For now, (Beta) Ç1È%ÉËÊ7ÉËÌ%ÍeÎ Ï ÇÐÈ%ÑÒÉkÊÑÒÉkÌÑ Ó&ÉÅÔµÔÅÔ0ÉkÌÑ ÕRÍ Ï ÇÐÈ%ÉJÊ7ÉkÌ ÉÅÔÅÔ'ÔRÉÒÌÍgÖ¢Ç1È%Ñ:ÉËÊÑ:ÉkÌ#Ñ Ó ÉÅÔµÔµÔ0ÉÒÌ#Ñ ÕÍ (Var) beta × redex Ø ÇÐÈ%ÉJÊ7ÉkÌÍ×_ÙÇÐÈ9Í1ڤܸۚצÜnÝ ­ · ÏpÞ Ü Ñ Ô co ßJÇ1ÜÍ ­ Ü Ñ (Lab) beta × redex Ø ÇÐÈ%ÉJÊ7ÉkÌÍצÜgàá â0Ç1Ü Ó É'ÔÅÔÅÔÉkÜxã'ÍRצä ã å æ à co ßËÇÐÜ å Í ­ Ü#Ñ å ×eÜgàÝ ­ç ÇÐÈ9Í0×eÜàRè é çê ÇÐÊ®Í Ï ÜBÑ à á â0Ç1ÜBÑ Ó ÉµÔÅÔµÔ0ÉkÜ#Ñ ã Í (Dom) beta × änë å&æ Ó co ßËÇÐÜ å Í ­ Ü#Ñ å ×¦Ü Ó Ú Û Ü ë Ï Ü#Ñ Ó Ú Û ÜBÑ ë ( ì .Inv) beta × redex Ø Ç1È%ÉËÊnÉÒÌ%Í'×Bìí Ó Ç1Ügà…Í ­®î Ü Ó ÉÅÔµÔµÔ0ÉkÜgï,ð<× ä ï å&æ à co Ó Ç1Ü å Í ­ Ü Ñ å Ï ìí Ó ÇÐÜ Ñ à Í ­®î Ü Ñ Ó ÉÅÔ'ÔÅÔRÉËÜ Ñ ï ð redex linear (Par.part) beta Õ × co ßËÇ1ÜÍ ­ Ü Ñ × · é˜ñ ÇÐÌ%ÍצÜBÚ¤Û · Ï Ü Ñ è é˜ò ÇÐÊ Ñ Í 9ó q ó o (Par.all) co Ç î Ç1ôeÉËô Ñ ÍËÉÅÔ'ÔÅÔJð&ÍJÇÐÜ%Í ­ Ü Ñ ×¦Ü éõñ ÇÐô+Í Ï Ü Ñ éõñ ÇÐô Ñ Í0× co ÇÐôeÉËô Ñ ÍËÇ1ÜÍ ­ Ü Ñ Figure 7: New saturation rules UB for constraint solving during underspecified -reduction. it is restricted to linear redexes; for the nonlinear case, we have to take recourse to disambiguation. 
It can be shown that the rules in UB are sound in the sense that they are valid implications when interpreted over  -structures. 6.3 Some Examples To see what the rules do, we go through the first reduction step in Fig. 1. The -reduction constraint that belongs to this reduction is 6 £(M…¡¶M   8 « —  6 £ y M…¡ y M  ²y  8 with £ ? @¤†ˆ% M ¡ ? ®¤†ˆ% M   ? #"† M £ y ? 2!=†P)+ Mö¡ y ? ),=†P)+ M  ²y  ? ),† Now saturation can add more constraints, for example the following: Ç  Í ·=÷ Ý ­ · Ó Ç $ Í ·=÷ Ý ­ ·=ø Ç ! Í ·ù Ý ­ · Ó Ç  ÍúÜ ÷ á ûüÇÐÜ ù Í (Lab) Ç  Í Þ Ü ÷ Ô co Ó Ç ·=÷ Í ­ Ü ÷ (Var) Ç  Íúý ë Ú Û Ü ÷ (Dom) Ç " Í Þ Ü ù Ô co Ó Ç ·ù Í ­ Ü ù (Var) We get (1), (2), (5) by propagation rules from (Bodirsky et al., 2001): variables bearing different labels must be different. Now we can apply (Var) to get (3) and (4), then (Lab) to get (6). Finally, (7) shows one of the dominances added by (Dom). Copies of all other variables and literals can be computed in a completely analogous fashion. In particular, copying gives us another redex starting at ),* , and we can continue with the algorithm usb in Figure 6. Note what happens in case of a nonlinear redex, as in the left picture of Fig. 8: as the redex is þ ary, the rules produce two copies of the  labeling constraint, one via co  and one via co ! . The result is shown on the right-hand side of the figure. We will return to this example in a minute. 6.4 More Complex Examples The last two rules in Fig. 7 enforce consistency between scoping in the redex and scoping in the reduct. The rules use literals that were introduced in (Bodirsky et al., 2001), of the forms 3 Y‘ 6   8 , 3 † Y›ÿ 6 ¡ 8 , etc., where   , ¡ are segment terms. We take 3 YZ 6   8 to mean that 3 must be inside the tree segment denoted by   , and we take 3 Y ÿ 6 ¡ 8 (i for ’interior’) to mean that 3 Y‘ 6 ¡ 8 and 3 denotes neither the root nor a hole of ¡ . As an example, reconsider Fig. 3: by rule (Par.part), the reduct (right-hand picture of Fig. 
3) cannot represent the spurious term, because that would require the λ operator to be in the interior i(B'). Similarly in Fig. 8, where we have introduced two copies of the λ label. If the λ in the redex on the left ends up as part of the context, there should be only one copy in the reduct. This is brought about by the rule (Par.all) and the fact that correspondence is a function (which is enforced by rules from (Erk et al., 2001) which are part of the solver in (Bodirsky et al., 2001)). Together, they can be used to infer that the λ node can have only one correspondent in the reduct context.

7 Conclusion

In this paper, we have shown how to perform an underspecified β-reduction operation in the CLLS framework. This operation transforms underspecified descriptions of higher-order formulas into descriptions of their β-reducts. It can be used to essentially β-reduce all readings of an ambiguous sentence at once. It is interesting to observe how our underspecified β-reduction interacts with parallelism constraints that were introduced to model ellipses. Consider the elliptical three-reading example "Peter sees a loophole. Every lawyer does too." Under the standard analysis of ellipsis in CLLS (Egg et al., 2001), "Peter" must be represented as a generalized quantifier to obtain all three readings. This leads to a spurious ambiguity in the source sentence, which one would like to get rid of by β-reducing the source sentence.

[Figure 8: "Peter and Mary do not laugh."]

Our approach can achieve this goal: adding β-reduction constraints for the source sentence leaves the original copy intact, and the target sentence still contains the ambiguity. Under the simplifying assumption that all redexes are linear, we can show that the time needed to perform k steps of underspecified reduction on a constraint with n variables is polynomial in n.
This is feasible for large as long as b  U , which should be sufficient for most reasonable sentences. If there are non-linear redexes, the present algorithm can take exponential time because subterms are duplicated. The same problem is known in ordinary  -calculus; an interesting question to pursue is whether the sharing techniques developed there (Lamping, 1990) carry over to the underspecification setting. In Sec. 6, we only employ propagation rules; that is, we never disambiguate. This is conceptually very nice, but on more complex examples (e.g. in many cases with nonlinear redexes) disambiguation is still needed. This raises both theoretical and practical issues. On the theoretical level, the questions of completeness (elimination of all redexes) and confluence still have to be resolved. To that end, we first have to find suitable notions of completeness and confluence in our setting. Also we would like to handle larger classes of examples without disambiguation. On the practical side, we intend to implement the procedure and disambiguate in a controlled fashion so we can reduce completely and still disambiguate as little as possible. References M. Bodirsky, K. Erk, A. Koller, and J. Niehren. 2001. Beta reduction constraints. In Proc. 12th Rewriting Techniques and Applications, Utrecht. J. Bos. 1996. Predicate logic unplugged. In Proceedings of the 10th Amsterdam Colloquium. R. Cooper. 1983. Quantification and Syntactic Theory. Reidel, Dordrecht. M. Egg, A. Koller, and J. Niehren. 2001. The constraint language for lambda structures. Journal of Logic, Language, and Information. To appear. K. Erk and J. Niehren. 2000. Parallelism constraints. In Proc. 11th RTA, LNCS 1833. K. Erk, A. Koller, and J. Niehren. 2001. Processing underspecified semantic representations in the Constraint Language for Lambda Structures. Journal of Language and Computation. To appear. A. Koller and J. Niehren. 2000. On underspecified processing of dynamic semantics. In Proc. 
18th COLING, Saarbr¨ucken. A. Koller, J. Niehren, and K. Striegnitz. 2000. Relaxing underspecified semantic representations for reinterpretation. Grammars, 3(2/3). Special Issue on MOL’99. To appear. J. Lamping. 1990. An algorithm for optimal lambda calculus reduction. In ACM Symp. on Principles of Programming Languages. M. P. Marcus, D. Hindle, and M. M. Fleck. 1983. Dtheory: Talking about talking about trees. In Proc. 21st ACL. R. Montague. 1974. The proper treatment of quantification in ordinary English. In Formal Philosophy. Selected Papers of Richard Montague. Yale UP. M. Pinkal. 1996. Radical underspecification. In Proc. 10th Amsterdam Colloquium. O. Rambow, K. Vijay-Shanker, and D. Weir. 1995. D-Tree Grammars. In Proceedings of ACL’95. U. Reyle. 1993. Dealing with ambiguities by underspecification: construction, representation, and deduction. Journal of Semantics, 10. K. van Deemter and S. Peters. 1996. Semantic Ambiguity and Underspecification. CSLI Press, Stanford.
Detecting problematic turns in human-machine interactions: Rule-induction versus memory-based learning approaches Antal van den Bosch  ILK / Comp. Ling. KUB, Tilburg The Netherlands [email protected] Emiel Krahmer   IPO TU/e, Eindhoven The Netherlands [email protected] Marc Swerts    CNTS UIA, Antwerp Belgium [email protected] Abstract We address the issue of on-line detection of communication problems in spoken dialogue systems. The usefulness is investigated of the sequence of system question types and the word graphs corresponding to the respective user utterances. By applying both ruleinduction and memory-based learning techniques to data obtained with a Dutch train time-table information system, the current paper demonstrates that the aforementioned features indeed lead to a method for problem detection that performs significantly above baseline. The results are interesting from a dialogue perspective since they employ features that are present in the majority of spoken dialogue systems and can be obtained with little or no computational overhead. The results are interesting from a machine learning perspective, since they show that the rule-based method performs significantly better than the memory-based method, because the former is better capable of representing interactions between features. 1 Introduction Given the state of the art of current language and speech technology, communication problems are unavoidable in present-day spoken dialogue systems. The main source of these problems lies in the imperfections of automatic speech recognition, but also incorrect interpretations by the natural language understanding module or wrong default assumptions by the dialogue manager are likely to lead to confusion. If a spoken dialogue system had the ability to detect communication problems on-line and with high accuracy, it might be able to correct certain errors or it could interact with the user to solve them. 
For instance, in the case of communication problems, it would be beneficial to change from a relatively natural dialogue strategy to a more constrained one in order to resolve the problems (see e.g., Litman and Pan 2000). Similarly, it has been shown that users switch to a ‘marked’, hyperarticulate speaking style after problems (e.g., Soltau and Waibel 1998), which itself is an important source of recognition errors. This might be solved by using two recognizers in parallel, one trained on normal speech and one on hyperarticulate speech. If there are communication problems, then the system could decide to focus on the recognition results delivered by the engine trained on hyperarticulate speech. For such approaches to work, however, it is essential that the spoken dialogue system is able to automatically detect communication problems with a high accuracy. In this paper, we investigate the usefulness for problem detection of the word graph and the history of system question types. These features are present in many spoken dialogue systems and do not require additional computation, which makes this a very cheap method to detect problems. We shall see that on the basis of the previous and the current word graph and the six most recent system question types, communication problems can be detected with an accuracy of 91%, which is a significant improvement over the relevant baseline. This shows that spoken dialogue systems may use these features to better predict whether the ongoing dialogue is problematic. In addition, the current work is interesting from a machine learning perspective. We apply two machine learning techniques: the memory-based IB1-IG algorithm (Aha et al. 1991, Daelemans et al. 1997) and the RIPPER rule induction algorithm (Cohen 1996). As we shall see, some interesting differences between the two approaches arise. 
2 Related work Recently there has been an increased interest in developing automatic methods to detect problematic dialogue situations using machine learning techniques. For instance, Litman et al. (1999) and Walker et al. (2000a) use RIPPER (Cohen 1996) to classify problematic and unproblematic dialogues. Following up on this, Walker et al. (2000b) aim at detecting problems at the utterance level, based on data obtained with AT&Ts How May I Help You (HMIHY) system (Gorin et al. 1997). Walker and co-workers apply RIPPER to 43 features which are automatically generated by three modules of the HMIHY system, namely the speech recognizer (ASR), the natural languageunderstanding module (NLU) and the dialogue manager (DM). The best result is obtained using all features: communication problems are detected with an accuracy of 86%, a precision of 83% and a recall of 75%. It should be noted that the NLU features play first fiddle among the set of all features. In fact, using only the NLU features performs comparable to using all features. Walker et al. (2000b) also briefly compare the performance of RIPPER with some other machine learning approaches, and show that it performs comparable to a memory-based (instance-based) learning algorithm (IB, see Aha et al. 1991). The results which Walker and co-workers describe show that it is possible to automatically detect communication problems in the HMIHY system, using machine learning techniques. Their approach also raises a number of interesting followup questions, some concerned with problem detection, others with the use of machine learning techniques. (1) Walker et al. train their classifier on a large set of features, and show that the set of features produced by the NLU module are the most important ones. However, this leaves an important general question unanswered, namely which particular features contribute to what extent? 
(2) Moreover, the set of features which the NLU module produces appear to be rather specific to the HMIHY system and indicate things like the percentage of the input covered by the relevant grammar fragment, the presence or absence of context shifts, and the semantic diversity of subsequent utterances. Many current day spoken dialogue systems do not have such a sophisticated NLU module, and consequently it is unlikely that they have access to these kinds of features. In sum, it is uncertain whether other spoken dialogue systems can benefit from the findings described by Walker et al. (2000b), since it is unclear which features are important and to what extent these features are available in other spoken dialogue systems. Finally, (3) we agree with Walker et al. (and the machine learning community at large) that it is important to compare different machine learning techniques to find out which techniques perform well for which kinds of tasks. Walker et al. found that RIPPER does not perform significantly better or worse than a memory-based learning technique. Is this incidental or does it reflect a general property of the problem detection task? The current paper uses a similar methodology for on-line problem detection as Walker et al. (2000b), but (1) we take a bottom-up approach, focussing on a small number of features and investigating their usefulness on a per-feature basis and (2) the features which we study are automatically available in the majority of current spoken dialogue system: the sequence of system question types and the word graphs corresponding to the respective user utterances. A word graph is a lattice of word hypotheses, and we conjecture that various features which have been shown to cue communication problems (prosodic, linguistic and ASR features, see e.g., Hirschberg et al. 1999, Krahmer et al. 1999 and Swerts et al. 2000) have correlates in the word graph. The sequence of system question types is taken to model the dialogue history. 
Finally, (3) to gain further insight into the adequacy of various machine learning techniques for problem detection we use both RIPPER and the memory-based IB1-IG algorithm. 3 Approach 3.1 Data and Labeling The corpus we used consisted of 3739 question-answer pairs, taken from 444 complete dialogues. The dialogues consist of users interacting with a Dutch spoken dialogue system which provides information about train time tables. The system prompts the user for unknown slots, such as departure station, arrival station, date, etc., in a series of questions. The system uses a combination of implicit and explicit verification strategies. The data were annotated with a highly limited set of labels. In particular, the kind of system question and whether the reply of the user gave rise to communication problems or not. The latter feature is the one to be predicted. The following labels are used for the system questions. O open questions (“From where to where do you want to travel?”) I implicit verification (“When do you want to travel from Tilburg to Schiphol Airport?”) E explicit verification (“So you want to travel from Tilburg to Schiphol Airport?”) Y yes/no question (“Do you want me to repeat the connection?”) M Meta-questions (“Can you please correct me?”) The difference between an explicit verification and a yes/no question is that the former but not the latter is aimed at checking whether what the system understood or assumed corresponds with what the user wants. If the current system question is a repetition of the previous question it asked, this is indicated by the suffix R. A question only counts as a repetition when it has the same contents as the previous system question. Of the user inputs, we only labeled whether they gave rise to a communication problem or not. A communication problem arises when the value which the system assigns to a particular slot (departure station, date, etc.) 
does not coincide with the value given for that particular slot by the user in his or her most recent contribution to the dialogue, or when the system makes an incorrect default assumption (e.g., the dialogue manager assumes that the date slot should be filled with the current date, i.e., that the user wants to travel today). Communication problems are generally easy to label, since the spoken dialogue system under consideration here always provides direct feedback (via verification questions) about what it believes the user intends. Consider the following exchange.

U: I want to go to Amsterdam.
S: So you want to go to Rotterdam?

As soon as the user hears the explicit verification question of the system, it will be clear that his or her last turn was misunderstood. The problem feature was labeled by two of the authors to avoid labeling errors. Differences between the two annotators were infrequent and could always easily be resolved.

3.2 Baselines

Of the 3739 user utterances, 1564 gave rise to communication problems (an error rate of 41.8%). The majority class is thus formed by the unproblematic user utterances, which form 58.2% of all user utterances. This suggests that the baseline for predicting communication problems is obtained by always predicting that there are no communication problems. This strategy has an accuracy of 58.2% and a recall of 0% (all problems are missed). The precision is not defined, and consequently neither is the F1.
It would not be very illuminating to develop an automatic error detector which detects only those problems that the system was already aware of. Therefore we take the following as our baseline strategy for predicting whether the previous user utterance gave rise to problems, henceforth referred to as the system-knows-baseline: if the current system question is a repetition or a meta-question, then predict that the previous user utterance caused problems; else predict that it caused no problems. This 'strategy' predicts problems with an accuracy of 85.6% (1024 of the 1564 problems are detected, thus 540 of 3739 decisions are wrong), a precision of 100% (of 1024 predicted problems, 1024 were indeed problematic), a recall of 65.5% (1024 of the 1564 problems are predicted to be problematic), and thus an F1 of 79.1. This is a sharp baseline, but for predicting whether the previous user utterance caused problems or not, the system-knows-baseline is much more informative and relevant than the majority-class-baseline. Table 1 summarizes the baselines.

[Footnotes: For definitions of accuracy, precision and recall see e.g., Manning and Schütze (1999:268-269). — Since 0 cases are selected by the majority-class baseline, one would have to divide by 0 to determine its precision. — Throughout this paper we use the F measure (van Rijsbergen 1979:174) to combine precision and recall in a single measure. By setting beta equal to 1, precision and recall are given an equal weight, and the measure simplifies to F1 = 2PR/(P+R) (P = precision, R = recall).]

Table 1: Baselines

baseline        acc (%)      prec (%)   rec (%)   F1
majority-class  58.2 ± 0.4   —          0.0       —
system-knows    85.6 ± 0.4   100        65.5      79.1

3.3 Feature representations

Question-answer pairs were represented as feature vectors (or patterns) of the following form. Six features were reserved for the history of system questions asked so far in the current dialogue (6Q).
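As a sanity check, both baselines in Table 1 follow directly from the corpus counts given above (a minimal sketch; the helper name `prf` is ours):

```python
# Baseline metrics for the corpus described above:
# 3739 user utterances, 1564 of them problematic, of which 1024
# were already signalled by the system itself (a repeated
# question or a meta-question).

def prf(true_pos, predicted, actual):
    """Precision, recall and F1 (the F measure with beta = 1)."""
    p = true_pos / predicted
    r = true_pos / actual
    return p, r, 2 * p * r / (p + r)

N, PROBLEMS, SYSTEM_KNOWN = 3739, 1564, 1024

# Majority-class baseline: always predict "no problem".
majority_acc = (N - PROBLEMS) / N                 # -> 58.2%

# System-knows baseline: predict "problem" exactly when the system
# repeats itself or asks a meta-question (all 1024 predictions correct,
# so 1564 - 1024 = 540 problems are missed).
sk_acc = (N - (PROBLEMS - SYSTEM_KNOWN)) / N      # -> 85.6%
sk_p, sk_r, sk_f1 = prf(SYSTEM_KNOWN, SYSTEM_KNOWN, PROBLEMS)

print(round(100 * majority_acc, 1))               # 58.2
print(round(100 * sk_acc, 1))                     # 85.6
print(round(100 * sk_p, 1), round(100 * sk_r, 1),
      round(100 * sk_f1, 1))                      # 100.0 65.5 79.1
```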
Of course, if the system only asked 3 questions so far, only 3 types of system questions are stored in memory and the remaining three features for system question are not assigned a value. The representation of the user's answer is derived from the word graph produced by the ASR module. It should be kept in mind that in general the word graph is much more complex than the recognized string. The latter typically is the most plausible path (e.g., on the basis of acoustic confidence scores) in the word graph, which itself may contain many other paths. Different systems determine the plausibility of paths in the word graph in different ways. Here, for the sake of generality, we abstract over such differences and simply represent a word graph as a Bag of Words (BoW), collecting all words that occur in one of the paths, irrespective of the associated acoustic confidence score. A lexicon was derived of all the words and phrases that occurred in the corpus. Each word graph is represented as a sequence of bits, where the i-th bit is set to 1 if the i-th word in the pre-derived lexicon occurred at least once in the word graph corresponding to the current user utterance, and 0 otherwise. Finally, for each user utterance, a feature is reserved for indicating whether it gave rise to communication problems or not. This latter feature is the one to be predicted. There are basically two approaches for detecting communication problems. One is to try to decide on the basis of the current user utterance whether it will be recognized and interpreted correctly or not. The other approach uses the current user utterance to determine whether the processing of the previous user utterance gave rise to communication problems. This approach is based on the assumption that users give feedback on communication problems when they notice that the system misunderstood their previous input.
In this study, eight prediction tasks have been defined: the first three are concerned with predicting whether the current user input will cause problems, and naturally, for these three tasks, the majority-class-baseline is the relevant one; the last five tasks are concerned with predicting whether the previous user utterance caused problems, and for these the sharp, system-knows-baseline is the appropriate one. The eight tasks are: (1) predict on the basis of the (representation of the) current word graph BoW(t) whether the current user utterance (at time t) will cause a communication problem, (2) predict on the basis of the six most recent system question types up to t (6Q(t)) whether the current user utterance will cause a communication problem, (3) predict on the basis of both BoW(t) and 6Q(t) whether the current user utterance will cause a problem, (4) predict on the basis of the current word graph BoW(t) whether the previous user utterance, uttered at time t-1, caused a problem, (5) predict on the basis of the six most recent system questions whether the previous user utterance caused a problem, (6) predict on the basis of BoW(t) and 6Q(t) whether the previous user utterance caused a problem, (7) predict on the basis of the two most recent word graphs, BoW(t-1) and BoW(t), whether the previous user utterance caused a problem, and finally (8) predict on the basis of the two most recent word graphs, BoW(t-1) and BoW(t), and the six most recent system question types 6Q(t), whether the previous user utterance caused a problem.

3.4 Learning techniques

For the experiments we used the rule-induction algorithm RIPPER (Cohen 1996) and the memory-based IB1-IG algorithm (Aha et al. 1991, Daelemans et al. 1997). RIPPER is a fast rule induction algorithm. It starts with splitting the training set in two. On the basis of one half, it induces rules in a straightforward way (roughly, by trying to maximize coverage for each rule), with potential overfitting.
When the induced rules classify instances in the other half below a certain threshold, they are not stored. Rules are induced per class. By default the ordering is from low-frequency classes to high-frequency ones, leaving the most frequent class as the default rule, which is generally beneficial for the size of the rule set. The memory-based IB1-IG algorithm is one of the primary memory-based learning algorithms. Memory-based learning techniques can be characterized by the fact that they store a representation of a set of training data in memory, and classify new instances by looking for the most similar instances in memory. The most basic distance function is the overlap metric in (1), where Δ(X, Y) is the distance between patterns X and Y (both consisting of n features) and δ is the distance between two feature values. If X is the test case, the measure determines which group k of cases Y in memory is most similar to X. The most frequent value for the relevant category in k is the predicted value for X. Usually, k is set to 1. Since some features are more important than others, a weighting function w_i is used; here w_i is the gain ratio measure. In sum, the weighted distance between vectors X and Y of length n is determined by the following equation, where δ(x_i, y_i) gives a point-wise distance between features which is 1 if x_i ≠ y_i and 0 otherwise.

Δ(X, Y) = Σ_{i=1}^{n} w_i · δ(x_i, y_i)    (1)

[Footnote: We used the TiMBL software package, version 3 (Daelemans et al. 2000) to run the IB1-IG experiments.]

Both learning techniques were used for the same eight prediction tasks, and received exactly the same feature vectors as input. All experiments were performed using ten-fold cross-validation, which yields error margins in the predictions.

4 Results

First we look at the results obtained with the IB1-IG algorithm (see Table 2). Consider the problem of predicting whether the current user utterance will cause problems.
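The weighted overlap metric of Eq. (1) with k = 1 can be sketched as follows (illustrative toy data; the gain-ratio weights here are made up, whereas IB1-IG computes them from the training set):

```python
# Sketch of Eq. (1): Delta(X, Y) = sum_i w_i * delta(x_i, y_i),
# where delta is the 0/1 overlap metric and w_i a per-feature weight
# (gain ratio in IB1-IG). Classification returns the label of the
# nearest stored pattern (k = 1).

def overlap_distance(x, y, weights):
    return sum(w * (1 if a != b else 0)
               for w, a, b in zip(weights, x, y))

def classify_1nn(test, memory, weights):
    """memory: list of (pattern, label) pairs."""
    return min(memory,
               key=lambda m: overlap_distance(test, m[0], weights))[1]

# Toy memory: (question type, a word-graph feature) -> label.
memory = [(("I", "naar"), "problem"), (("O", "van"), "ok")]
weights = [2.0, 1.0]   # hypothetical gain-ratio weights

print(overlap_distance(("I", "om"), ("I", "naar"), weights))  # 1.0
print(classify_1nn(("I", "om"), memory, weights))             # problem
```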
Either looking at the current word graph (BoW(t)), at the six most recent system questions (6Q(t)), or at both leads to a significant improvement with respect to the majority-class-baseline. The best results are obtained with only the system question types (although the difference with the results for the other two tasks is not significant): a 63.7% accuracy and an F1 of 58.3. However, even though this is a significant improvement over the majority-class-baseline, the accuracy is improved by only 5.5%. Next consider the problem of predicting whether the previous user utterance caused communication problems (these are the five remaining tasks). The best result is obtained by taking the two most recent word graphs and the six most recent system question types as input. This yields an accuracy of 88.1%, which is a significant improvement with respect to the

[Footnotes: All checks for significance were performed with a one-tailed t test. — As an aside, we performed one experiment with the words in the actual, transcribed user utterance at time t instead of BoW(t), where the task is to predict whether the current user utterance would cause a communication problem. This resulted in an accuracy of 64.2% (with a standard deviation of 1.1%). This is not significantly better than the result obtained with the BoW.]
input                      output        acc (%)      prec (%)   rec (%)   F
BoW(t)                     problem(t)    63.2±4.1*    57.1±5.0   49.6±3.8  53.0±3.8
6Q(t)                      problem(t)    63.7±2.3*    56.1±3.4   60.8±5.0  58.3±3.6
BoW(t) + 6Q(t)             problem(t)    63.5±2.0*    57.5±2.8   49.1±3.3  52.8±1.9
BoW(t)                     problem(t-1)  61.9±2.3     55.1±2.6   48.8±1.9  51.7±1.2
6Q(t)                      problem(t-1)  82.4±2.0     85.6±3.8   69.6±3.7  76.6±3.5
BoW(t) + 6Q(t)             problem(t-1)  87.3±1.1+    85.5±2.8   83.9±1.3  84.7±1.3
BoW(t-1) + BoW(t)          problem(t-1)  73.5±1.7     69.8±3.8   64.6±2.3  67.0±2.3
BoW(t-1) + BoW(t) + 6Q(t)  problem(t-1)  88.1±1.1+    91.1±2.4   79.3±3.1  84.8±2.0

Table 2: IB1-IG results (accuracy, precision, recall, and F, with standard deviations) on the eight prediction tasks. *: this accuracy significantly improves the majority-class baseline (p < .001). +: this accuracy significantly improves the system-knows baseline (p < .001).

input                      output        acc (%)      prec (%)   rec (%)   F
BoW(t)                     problem(t)    65.1±2.4*    58.3±3.4   59.8±4.2  58.9±2.0
6Q(t)                      problem(t)    65.9±2.1*#   58.9±3.5   60.7±4.8  59.7±3.2
BoW(t) + 6Q(t)             problem(t)    66.0±2.3*#   64.8±2.6   50.3±3.1  56.5±1.1
BoW(t)                     problem(t-1)  63.2±2.5     60.3±5.5   36.1±5.5  44.8±4.6
6Q(t)                      problem(t-1)  83.4±1.6     99.8±0.4   60.4±3.1  75.2±2.4
BoW(t) + 6Q(t)             problem(t-1)  90.0±2.1+#   93.2±1.7   82.5±4.5  87.5±2.6
BoW(t-1) + BoW(t)          problem(t-1)  76.7±2.6#    74.7±3.6   66.0±5.7  69.9±3.8
BoW(t-1) + BoW(t) + 6Q(t)  problem(t-1)  91.1±1.1+#   92.6±2.0   85.7±2.9  89.0±1.5

Table 3: RIPPER results (accuracy, precision, recall, and F, with standard deviations) on the eight prediction tasks. *: this accuracy significantly improves the majority-class baseline (p < .001). +: this accuracy significantly improves the system-knows baseline (p < .001). #: this accuracy result is significantly better than the IB1-IG result given in Table 2 for this particular task (at p < .05 or better; the original marks distinguish p < .05, p < .01 and p < .001).

sharp system-knows baseline. In addition, the F of 84.8 is nearly 6 points higher than that of the relevant majority-class baseline.

The results obtained with RIPPER are shown in Table 3. On the problem of predicting whether the current user utterance will cause a problem, RIPPER obtains the best results by taking as input both the current word graph and the types of the six most recent system questions, predicting problems with an accuracy of 66.0%. This is a significant improvement over the majority-class baseline, but the result is not significantly better than that obtained with either the word graph or the system questions in isolation. Interestingly, the result is significantly better than the results for IB1-IG on the same task.

On the problem of predicting whether the previous user utterance caused a problem, RIPPER obtains the best results by taking all features into account (that is: the two most recent bags of words and the six system questions). (Notice that RIPPER sometimes performs below the system-knows baseline, even though the relevant feature, in particular the type of the last system question, is present. Inspection of the RIPPER rules obtained by training only on 6Q reveals that RIPPER learns a slightly suboptimal rule set, thereby misclassifying 10 instances on average.) This results in a 91.1% accuracy, which is a significant improvement over the sharp system-knows baseline. This implies that 38% of the communication problems which were not detected by the dialogue system

1. if Q(t) = R then problem. (939/2)
2. if Q(t) = I and "naar" in BoW(t-1) and "naar" in BoW(t) and "om" not in BoW(t) then problem. (135/16)
3.
if "uur" in BoW(t-1) and "om" in BoW(t-1) and "uur" in BoW(t) and "om" in BoW(t) then problem. (57/4)
4. if Q(t) = I and Q(t-3) = I and "uur" in BoW(t-1) then problem. (13/2)
5. if "naar" in BoW(t-1) and "vanuit" in BoW(t) and "van" not in BoW(t) then problem. (29/4)
6. if Q(t-1) = I and "uur" in BoW(t-1) and "nee" in BoW(t) then problem. (28/7)
7. if Q(t) = I and "ik" in BoW(t-1) and "van" in BoW(t-1) and "van" in BoW(t) then problem. (22/8)
8. if Q(t) = I and "van" in BoW(t-1) and "om" in BoW(t-1) then problem. (16/6)
9. if Q(t) = E and "nee" in BoW(t) then problem. (42/10)
10. if Q(t) = M and BoW(t-1) is empty then problem. (20/0)
11. if Q(t-1) = O and "ik" in BoW(t) and "niet" in BoW(t) then problem. (10/2)
12. if Q(t-2) = I and Q(t) = O and "wil" in BoW(t-1) then problem. (8/0)
13. else no problem. (2114/245)

Figure 1: RIPPER rule set for predicting whether user utterance t-1 caused communication problems, on the basis of the Bags of Words for t and t-1 and the six most recent system questions. Based on the entire data set. The question features are defined in section 2. The word "naar" is Dutch for to, "om" for at, "uur" for hour, "van" for from, "vanuit" is a slightly archaic variant of "van" (from), "ik" is Dutch for I, "nee" for no, "niet" for not and "wil", finally, for want. The (c/i) numbers at the end of each line indicate how many correct (c) and incorrect (i) decisions were taken using this particular if ... then ... statement.

under investigation could be classified correctly using features which were already present in the system (word graphs and system question types). Moreover, the F is 89, which is 10 points higher than the F associated with the system-knows baseline strategy. Notice also that this RIPPER result is significantly better than the IB1-IG results for the same task. To gain insight into the rules learned by RIPPER for the last task, we applied RIPPER to the complete data set. The rules induced are displayed in Figure 1.
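A rule set like the one in Figure 1 acts as an ordered decision list: the first rule whose conditions all hold determines the class, and the final rule is the default. A minimal sketch of how rules 1, 2, and the default would be applied (the data structures are invented for illustration; Q maps time offsets to system question types, BoW to the bags of words of the word graphs):

```python
def predict_problem(q, bow):
    """Apply a fragment of the Figure 1 decision list (rules 1, 2, and 13).
    q[0] and bow[0] describe the current turn t; q[-1] and bow[-1] turn t-1."""
    # Rule 1: the system repeated its question (type R) -> problem.
    if q[0] == "R":
        return "problem"
    # Rule 2: implicit verification (type I), "naar" repeated across turns,
    # and no "om" in the current word graph -> problem.
    if (q[0] == "I" and "naar" in bow[-1]
            and "naar" in bow[0] and "om" not in bow[0]):
        return "problem"
    # Rule 13: default -> no problem.
    return "no problem"

print(predict_problem({0: "R", -1: "O"}, {0: set(), -1: set()}))       # -> problem
print(predict_problem({0: "I", -1: "O"}, {0: {"naar"}, -1: {"naar"}})) # -> problem
print(predict_problem({0: "I", -1: "O"},
                      {0: {"naar", "om"}, -1: {"naar"}}))              # -> no problem
```

The ordering matters: an instance matching both rule 1 and rule 2 is decided by rule 1, which is why RIPPER's per-rule (c/i) counts are computed over the instances that reach that rule.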
RIPPER's first rule is concerned with repeated questions (compare with the system-knows baseline). One important property of many of the other rules is that they explicitly combine pieces of information from the three main sources of information (the system questions, the current word graph and the previous word graph). Moreover, it is interesting to note that the words which crop up in the RIPPER rules are primarily function words. Another noteworthy feature of the RIPPER rules is that they reflect certain properties which have been claimed to cue communication problems. For instance, Krahmer et al. (1999), in their descriptive analysis of dialogue problems, found that repeated material is often an indication of problems, as is the use of a marked vocabulary. Rules 2, 3 and 7 are examples of the former cue, while the occurrence of the somewhat archaic "vanuit" instead of the ordinary "van" is an example of the latter.

5 Discussion

In this study we have looked at automatic methods for problem detection using simple features which are available in the vast majority of spoken dialogue systems and require little or no computational overhead. We have investigated two approaches to problem detection. The first approach tests whether a user utterance, captured in a noisy word graph (noisy in the sense that it is not a perfect image of the user's input), and/or the recent history of system utterances, is predictive of whether the utterance itself will be misrecognised. The results, which basically represent a signal quality test, show that problematic cases can be discerned with an accuracy of about 65%. Although this is somewhat above the baseline of 58% decision accuracy obtained when no problems are predicted, signalling recognition problems with word graph features and previous system question types as predictors is a hard task. As other studies suggest (e.g., Hirschberg et al. 1999), confidence scores and acoustic/prosodic features could be of help.

The second approach tests whether the word graph for the current user utterance and/or the recent history of system question types can be employed to predict whether the previous user utterance caused communication problems. The underlying assumption is that users will signal problems as soon as they become aware of them through the feedback provided by the system. Thus, in a sense, this second approach represents a noisy-channel filtering task: the current utterance has to be decoded as signalling a problem or not. As the results show, this task can be performed at a surprisingly high level: about 91% decision accuracy (an error reduction of 38%), with an F on the problem category of 89. This result can only be obtained using a combination of features; neither the word graph features in isolation nor the system question types in isolation offer enough predictive power to reach above the sharp baseline of 86% accuracy and an F on the problem category of 79.

Keeping information sources isolated or combining them directly influences the relative performance of the memory-based IB1-IG algorithm versus the RIPPER rule-induction algorithm. When features are of the same type, the accuracies of the memory-based and rule-induction systems do not differ significantly (with one exception). In contrast, when features from different sources (e.g., words in the word graph and question type features) are combined, RIPPER profits more than IB1-IG does, causing RIPPER to perform significantly more accurately. The feature independence assumption of memory-based learning appears to be the cause: by its definition, IB1-IG does not give extra weight to apparently relevant interactions of feature values from different sources.
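The F values reported for the two tasks behave as the evenly weighted F-score, the harmonic mean of precision and recall (cf. van Rijsbergen 1979). Two of the reported rows can be checked directly (a quick sketch, not part of the original experiments):

```python
def f_score(precision, recall):
    """Evenly weighted F-score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Best IB1-IG run (Table 2): precision 91.1, recall 79.3
print(round(f_score(91.1, 79.3), 1))  # -> 84.8

# Best RIPPER run (Table 3): precision 92.6, recall 85.7
print(round(f_score(92.6, 85.7), 1))  # -> 89.0
```

Because the harmonic mean is dominated by the smaller of the two values, a run such as RIPPER's 6Q(t)-only model (precision 99.8, recall 60.4) still ends up with a modest F, which is why the combined-feature runs win on F despite slightly lower precision.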
In contrast, in nine out of the twelve rules that RIPPER produces, word graph features and system question type features are explicitly integrated as joint left-hand-side conditions.

The current results show that for on-line detection of communication problems at the utterance level it is already beneficial to pay attention only to the lexical information in the word graph and the sequence of system question types, features which are present in most spoken dialogue systems and which can be obtained with little or no computational overhead. An approach to automatic problem detection is potentially very useful for spoken dialogue systems, since it gives a quantitative criterion for, for instance, changing the dialogue strategy (initiative, verification) or the speech recognition engine (from one trained on normal speech to one trained on hyperarticulate speech).

Bibliography

Aha, D., Kibler, D., Albert, M. (1991), Instance-based Learning Algorithms, Machine Learning, 6:36–66.
Cohen, W. (1996), Learning trees and rules with set-valued features, Proc. 13th AAAI.
Daelemans, W., van den Bosch, A., Weijters, A. (1997), IGTree: using trees for compression and classification in lazy learning algorithms, Artificial Intelligence Review 11:407–423.
Daelemans, W., Zavrel, J., van der Sloot, K., van den Bosch, A. (2000), TiMBL: Tilburg Memory-Based Learner, version 3.0, reference guide, ILK Technical Report 00-01, http://ilk.kub.nl/~ilk/papers/ilk0001.ps.gz.
Gorin, A., Riccardi, G., Wright, J. (1997), How may I Help You?, Speech Communication 23:113–127.
Hirschberg, J., Litman, D., Swerts, M. (1999), Prosodic cues to recognition errors, Proc. ASRU, Keystone, CO.
Krahmer, E., Swerts, M., Theune, M., Weegels, M. (1999), Error spotting in human-machine interactions, Proc. EUROSPEECH, Budapest, Hungary.
Litman, D., Pan, S. (2000), Predicting and adapting to poor speech recognition in a spoken dialogue system, Proc. 17th AAAI, Austin, TX.
Litman, D., Walker, M., Kearns, M. (1999), Automatic Detection of Poor Speech Recognition at the Dialogue Level, Proc. ACL'99, College Park, MD.
Manning, C., Schütze, H. (1999), Foundations of Statistical Natural Language Processing, The MIT Press, Cambridge, MA.
van Rijsbergen, C.J. (1979), Information Retrieval, London: Butterworths.
Soltau, H., Waibel, A. (1998), On the influence of hyperarticulated speech on recognition performance, Proc. ICSLP'98, Sydney, Australia.
Swerts, M., Litman, D., Hirschberg, J. (2000), Corrections in spoken dialogue systems, Proc. ICSLP 2000, Beijing, China.
Walker, M., Langkilde, I., Wright, J., Gorin, A., Litman, D. (2000a), Learning to predict problematic situations in a spoken dialogue system: Experiments with How May I Help You?, Proc. NAACL, Seattle, WA.
Walker, M., Wright, J., Langkilde, I. (2000b), Using natural language processing and discourse features to identify understanding errors in a spoken dialogue system, Proc. ICML, Stanford, CA.
2001
[The following paper in this collection was extracted from a PDF with a nonstandard font encoding and its text is garbled beyond recovery. The legible fragments indicate a paper on word segmentation of Chinese (and English) text with the MBDP-1 algorithm, which bootstraps its own lexicon starting out empty, compared against the PPM-based segmenter of Teahan et al., with experiments on the PH corpus of Chinese newspaper text and the English portion of the Hansard corpus.]
E€ Œ6, `€ Š  „ „ ¨ ” „ E‡ „ Œ“ … † 1… „   ” š Š €  ŽŒ …  ‹Š  „ „ Ž ‹„  Š ƒ Š €   Œ ƒ š šE‚ … „ Ÿ €  ”E „   Š „ E‡ „  € Š  „‡  … ‚ ”Eƒ E†   -€ Š  „  Š … €  ŽŠ Eƒ Šš € „  ˆE„ Š “„ „ |  † „  Tƒ E†T€ |Š  „3‡ ” … … „  Ё „  Š „ E‡ „  Š  „  „ š ˜…  ˆ   !>>9 « , `   ­  “1 „ … „€  Š  „1‡  E‡ ƒ Š „ Eƒ Š €   ‚E„ … ƒ Š  …ƒ E†9|€  Š  „ Œ ” E‡ Š €  O† „ ½  „ †ˆ ‰ « ¯ ­1ƒ E† « ´ ­ › 1 … ‹ƒ š š ‰  •O–1— ˜ ™ € ,€  € Š € ƒ š € ¡ „ †Š ƒ1Eƒ € Ÿ „ Š ƒ Š „ € @“1 € ‡ Tƒ š š“ … † ƒ … „  Ÿ „ š1ƒ E†@ƒ š šš „ Š Š „ … Eƒ Ÿ „  ˆE „ … Ÿ „ †@Œ … „ ¨ ” „ E‡ ‰|± › ²  “„ Ÿ „ … ”E €  ŽT•O–—˜  ™ “1€ Š Oƒ‹ƒ  ”Eƒ š š ‰ „ Ž ‹„  Š „ †Š … ƒ €  €  އ  … ‚ ”E1‡  ” š †   ŠˆE„3 € ‹‚ š „ …^ Š  „OŠ … ƒ €  €  ŽT‡  … ‚ ”E€  € ‹‚ š ‰ ƒ ‚ ‚E„ E† „ †3Š Š  „Š „  Ї  … ‚ ”Eƒ ƒ‚ … „ ½ ’,,“1€ Š @€ Š   „ Ž ‹„  Š ƒ Š €  OŒ …  ¡ „ O€ @ƒ † Ÿ ƒ E‡ „ ›~ ”E ,“1 „ 3Š  „ ½ …  Š Š „  Ё „  Š „ E‡ „€  ‚ …  ‡ „   „ †, Š  „1Š … ƒ €  €  އ  … ‚ ”E ‚ š ƒ ‰ Š  „…  š „ Œ8,^` € Š  „š ƒ  Š1„ ¨ ”Eƒ Š €   ›   J9 ? 
5 d9,AB –1ƒ  „ †O @‚ ” ˆ š €   „ †@… „  ” š Š  •O–—˜  ™ƒ ‚ ‚E„ ƒ … Š  ˆE„@Š  „T‹  Š„ ¸E„ ‡ Š € Ÿ „@œ   “1&ƒ š Ž  … € Š  ‹Œ  …O „ Ž  ‹„  Š ƒ Š €   Œ3‚    „ ‹€ ‡ ƒ š š ‰dŠ … ƒ E ‡ … € ˆE„ † ‚E  Š ƒ   „  ”E  ‚E„ „ ‡ ˆ ‰ ‹ Š  „ … ,Š  ‰  ”  Ž ‡  € š † … „  « –… „  Š  ™ ¬ ¬ ¬ ƒ ® –… „  Š  ™ ¬ ¬ ¬ ˆE­ › ²  “„ Ÿ „ …  € Š “1ƒ  † „  € Ž  „ †ƒ  ƒ3‡  ‹‚ ” Š ƒ Š €  Eƒ š‹ † „ š1 Œ   “‡  € š † … „ | „ Ž ‹„  Š  ‚E„ „ ‡ € Š  „‡  ” …  „ Œ ƒ ‡ ¨ ” € … €  ŽŠ  „ € …1Eƒ Š € Ÿ „š ƒ   Ž ”Eƒ Ž „  ƒ ƒ ‚ ‚ š € ‡ ƒ Š €  Š Eƒ І  „ ,  Š ‚E„ … ‹€ Š Š  „”E „  Œ‹ƒ  ”Eƒ š š ‰H „ Ž ‹„  Š „ †|Š … ƒ €  €  ŽTŠ „ ’ Š  › ” … Š  „ …  “1… € Š Š „ Š „ ’ Š Eƒ ,Ÿ „ … ‰† € ¸E„ … „  Ї Eƒ … ƒ ‡ Š „ … €  Š € ‡ ,Œ …  ‹  ‚E  Š ƒ  „  ”E‡  € š †  † € … „ ‡ Š „ †O ‚E„ „ ‡  ,„ Ÿ „ 3€ @ƒ @ƒ š  ‚ Eƒ ˆE„ Š € ‡“1… € Š €  ށ ‰  Š „ ‹O› ~ „ ’ Š1“1… € Š Š „ € 3‘ €  „  „ ‡ Eƒ … ƒ ‡ Š „ … 1€ „ Ÿ „ O‹ … „… „ ‹ Š „Œ …  ‹Š  „‡  … ‚E … ƒ  “1 € ‡ •O–—˜  ™Eƒ †ˆE„ „ Š „  Š „ †,› ~ ”E  “„1Eƒ †  € † „ ƒ  “d•O–—˜  ™“ ” 𠆂E„ … Œ  … ‹ T‘ €  „  „ Š „ ’ Š  „ Ž ‹„  Š ƒ Š €  ƒ Œ Š „ …1ˆE„ €  ŽŠ … ƒ €  „ † ƒ‹ƒ  ”  ƒ š š ‰ „ Ž ‹„  Š „ †Š „ ’ Š › » Š  „Œ  š š  “1€  Ž„ ’ ‚E„ … € ‹„  Š  “„3ƒ ‚ ‚ š € „ †|•O–1— ˜ ™Š @Š  „3 „ Ž ‹„  Š ƒ Š €  | Œ « ™ ­ Š  „3˜ ² ‘ … ‚ ”E Œ‘ €  „  „O „ “  ‚Eƒ ‚E„ …Š „ ’ Š 1ƒ E† « ° ­ ˜ ƒ … Š  & Œ Š  „ž Ž š €  O‚E … Š €   Œ Š  „ ² ƒ E ƒ … † ‡  … ‚ ”E E“1 € ‡ 3€ ƒ ƒ ‹‚ š „ ŒŠ  „ ·‡ € ƒ š ‚ …  ‡ „ „ †  €  Ž   ŒŠ  „‘ƒ Eƒ † € ƒ O‚Eƒ … š € ƒ ‹„  Š › µ „ƒ š  Š „  Š „ † ˜˜•@ “1 € ‡ T€ ƒ ‚ ‚Eƒ … „  Š š ‰3Š  „ˆE„  Šœ   “1|ƒ š Ž   … € Š  ‹ Œ  …Š  € ‚ …  ˆ š „ ‹ « ~,„ ƒ Eƒ 3„ Šƒ š ›  ° ± ± ± ­    Š  „ ƒ ‹„‡  … ‚E … ƒ ›   ~ „½ …  Ї  … ‚ ”E“„”E „ †@€ c” @§ € )| •Oƒ E† ƒ … €  ‘ €  „  „H˜ ² ‡  … ‚ ”E ‡   Š ƒ €  €  Ž=‹ … „|Š Eƒ d  „ ‹€ š š €  O“ … † 1 Œ „ “  ‚Eƒ ‚E„ … Š  … € „ 1Œ …  ‹Š  „ €   
 ”Eƒ@ „ “ ƒ Ž „ E‡ ‰| Œ˜ ‘ € EƒT“1… € Š Š „ HˆE„ Š “„ „  § ƒ  ”Eƒ … ‰ 1™ ¬ ¬ ±Oƒ E†T•Oƒ … ‡  1™ ¬ ¬ ™ ›T~ € ‹ƒ  ”Eƒ š š ‰  „ Ž ‹„  Š „ †@‡  … ‚ ”E€ … „ ‚ … „  „  Š „ †3€ TŠ  „ Š ƒ E† ƒ … † –H‡  † €  ށ ‡  „ ‹„  “1 € ‡ ”E „ Š “ˆ ‰ Š „ Œ  …1„ ƒ ‡  ‘ €  „  „‡ Eƒ … ƒ ‡ Š „ … ›4 š š  “1€  ŽŠ  „‚ …  ‡ „ † ” … „”E „ † ˆ ‰3~,„ ƒ Eƒ „ Š ƒ š › « ° ± ± ± ­  “„Š … „ ƒ Š „ †O„ ƒ ‡ 3ˆ ‰ Š „ƒ  ƒ@ „ ‚Eƒ … ƒ Š „€  ‚ ” Ё ‰ ‹ˆE š Œ  …ˆE Š =•O–—˜  ™Oƒ E† ˜˜•@›E~ ”E E€ Š €  ‚E   € ˆ š „Œ  …„ € Š  „ …ƒ š Ž  … € Š  ‹Š  € E „ … Š ƒ“ … †ˆE ” E† ƒ … ‰ˆE„ Š “„ „ Š  „Š “ˆ ‰ Š „ 1 Œ ƒ‘ €  „  „‡ Eƒ … ƒ ‡ Š „ … › µ „‡    „Š  „½ …  Š  „‹€ š š €  @“ … †  Œ1˜ ² ƒ  ƒŠ … ƒ €  €  އ  … ‚ ”E ƒ E†OŠ  „Œ  š š  “1€  Ž3™ ¯  ± ± ±“ … †  ƒ ƒŠ „  Š1‡  … ‚ ”E › ~ „ Š “ƒ š Ž  … € Š  ‹ “„ … „1Š … ƒ €  „ †  = ” ˆE „ Š  ŒŠ  „3 Ÿ „ … ƒ š š Š … ƒ €  €  Ž|‡  … ‚ ”E“1   „  € ¡ „  Ÿ ƒ … € „ †Œ …  ‹° @ ” ‚Š ° @ d “ … †     Š1‡  ”  Š €  Ž ‚ ” E‡ Š ”Eƒ Š €   ›~ „Š „  Š ‡  … ‚ ”E“1ƒ 1† € Ÿ € † „ †€  Š O™ ¯  ƒ ‹‚ š „  Œ™  ± ± ±3“ … † „ ƒ ‡ H @“„3‡  ” š †Hƒ   „   Š  „Ÿ ƒ … € ƒ E‡ „€ |‚E„ … Œ  … ‹ƒ E‡ „ƒ ‡ …    ƒ ‹‚ š „  Œƒ Ž € Ÿ „ Ž „  … „ ›~ „Š „  Ї  … ‚ ”E“1ƒ ‚ … „ ‚ …  ‡ „   „ †Š  … „ ‹ Ÿ „ƒ š š, ‚Eƒ ‡ „ ƒ E†‡ … „ ƒ Š „ „ ‚Eƒ … ƒ Š „  „  Š „ E‡ „   ƒ Š1‚ ” E‡ Š ”Eƒ Š €  O‹ƒ … œ   “1 € ‡ Oƒ … „”E „ †€ 3‘ €  „  „ ƒ ‚ ‚ …  ’ € ‹ƒ Š „ š ‰ƒ € ž Ž š €   › ~ „ „ ‡  E†3‡  … ‚ ”E“„”E „ †O“1ƒ ˜ ƒ … Š d ŒŠ  „ ž  Ž š €  ‚E … Š €   Œ Š  „ ² ƒ E ƒ … †‡  … ‚ ”E  “1 € ‡ ‡    Š ƒ € Eƒ ƒ ‹‚ š „  ŒEŠ  „ ‚ …  ‡ „ „ † €  Ž   Œ,Š  „‘ƒ Eƒ † € ƒ  ‚Eƒ … š € ƒ ‹„  Š › µ „T„ ’ Š … ƒ ‡ Š „ †=Š … ƒ €  €  Ž=ƒ E†=Š „  Š €  Ž  ƒ ‹‚ š „ „ ’ ƒ ‡ Š š ‰Tƒ Œ  …Š  „O˜ ² ‡  … ‚ ”E  ˆ ” Ё € E‡ „ Š  „ ² ƒ E ƒ … †€  š ƒ … Ž „ … “„“„ … „ƒ ˆ š „Š €  Ÿ „  Š € Ž ƒ Š „ Š … ƒ €  €  ށ ƒ ‹‚ š „  Œ ” ‚Š ° @ @ “ … †  ›       µ „3„ Ÿ 
ƒ š ”Eƒ Š „ †HŠ  „3… „  ” š Š ˆ ‰Hƒ ‚ ‚ š ‰ €  Ž|Š  „@ Š ƒ   † ƒ … †‹„ ƒ  ” … „  IL K { M  z ƒ E† L K { G J J Š ‚Eƒ € …   Œ,‡     „ ‡ ” Š € Ÿ „“ … †ˆE ” E† ƒ … € „  «   Š1Š € E† € Ÿ € † ”Eƒ š,“ … † ˆE ” E† ƒ … € „  ­ ›~,|‡  ‹‚ ” Š „@Š  „  „3‹„ ƒ  ” … „  1„ ƒ ‡  ‡ Eƒ … ƒ ‡ Š „ …, Œ Š  „1ƒ ” Š  ‹ƒ Š € ‡ „ Ž ‹„  Š ƒ Š €  €  ƒ š € Ž  „ † “1€ Š OŠ  „‡  … … „  ‚E E† €  އ Eƒ … ƒ ‡ Š „ …1 ŒŠ  „ Š ƒ E† ƒ … †  „ Ž ‹„  Š ƒ Š €   ›|žƒ ‡ T“ … †@€ |Š  „Oƒ ” Š  ‹ƒ Š € ‡ „ Ž  ‹„  Š ƒ Š €  3€  š ƒ ˆE„ š „ †3ƒŠ … ” „‚E  € Š € Ÿ „€ Œ€ Šš €  „  ” ‚ „ ’ ƒ ‡ Š š ‰ “1€ Š ƒ“ … † € Š  „1 Š ƒ E† ƒ … † „ Ž ‹„  Š ƒ Š €    Š Eƒ Š€   ˆE Š =ˆE ” E† ƒ … € „ ‹ƒ Š ‡  ›žƒ ‡ H“ … †|€  Š  „3ƒ ” Š  ‹ƒ Š € ‡ „ Ž ‹„  Š ƒ Š €  TŠ Eƒ І  „   Šƒ š € Ž  „ ’ ƒ ‡ Š š ‰H“1€ Š &ƒT“ … †|€ HŠ  „T Š ƒ E† ƒ … †H „ Ž ‹„  Š ƒ  Š €  3€  š ƒ ˆE„ š „ †3ƒŒ ƒ š  „‚E  € Š € Ÿ „ ›žƒ ‡ O“ … †€ 3Š  „  Š ƒ E† ƒ … † „ Ž ‹„  Š ƒ Š €  Š Eƒ І  „    Šƒ š € Ž „ ’ ƒ ‡ Š š ‰ “1€ Š 3ƒ“ … †€ OŠ  „ƒ ” Š  ‹ƒ Š € ‡ „ Ž ‹„  Š ƒ Š €  € 1š ƒ  ˆE„ š „ †ƒ1Œ ƒ š  „ „ Ž ƒ Š € Ÿ „ ›) … „ ’ ƒ ‹‚ š „  € Œ Š  „Š „  Ё „   Š „ E‡ „1€     – ‘  ž—  Š  „  Š ƒ E† ƒ … † „ Ž ‹„  Š ƒ Š €   €   &– ‘=  ž—  Eƒ E†Š  „ƒ ” Š  ‹ƒ Š € ‡ „ Ž ‹„  Š ƒ  Š €  €   T–1‘3 |ž—U  Š  „ Š  „Š … ” „‚E  € Š € Ÿ „  ƒ … „ |  4| | – ‘|  ƒ E†| 7| › ~ „ Œ ƒ š  „ ‚E  € Š € Ÿ „ 1ƒ … „c|  4| « Š  „  „ ‡  E†  „ ­1ƒ E† | ž—4| ›|   ž—c| € 1ƒŒ ƒ š  „ „ Ž ƒ Š € Ÿ „ ›  €  Ž Š  „  „Š „ … ‹  “„† „ ½  „‚ … „ ‡ €  €  ƒ E†… „ ‡ ƒ š š ƒ Œ  š š  “  = ‚ … „ ‡ €  €  ^> Š … ” „‚E  € Š € Ÿ „  Š … ” „‚E  € Š € Ÿ „  !@Œ ƒ š  „‚E  € Š € Ÿ „  « ³ ­ … „ ‡ ƒ š š> Š … ” „‚E  € Š € Ÿ „  Š … ” „‚E  € Š € Ÿ „ !3Œ ƒ š  „ „ Ž ƒ Š € Ÿ „  « ¬ ­ ˜… „ ‡ €  €  € Š  „ ‚ …  ‚E … Š €  0 Œ)Š  „ ‹ƒ ‡  €  „   „ Ž ‹„  Š „ †=“ … † Š Eƒ Š@ƒ … „@… € Ž  Š › 1„ ‡ ƒ š š€ OŠ  „ ‚ …  ‚E … Š €   Œ,“ … †  € Š  
„ Š ƒ E† ƒ … † „ Ž ‹„  Š ƒ Š €   “ … † Š Eƒ Š1ƒ … „ € † „  Š € ½ „ †ˆ ‰Š  „ƒ š Ž  … € Š  ‹›~ „  „ Š “‹„ ƒ  ” … „ 1‡ ƒ O† € Ÿ „ … Ž „  ƒ E†Ž   †‚E„ … Œ  … ‹ƒ E‡ „ € 1ƒ ‡  € „ Ÿ „ †  𠉓1 „ ˆE Š Oƒ … „ € Ž  › € Ž ” … „°   “ Š  „… „  ” š Š  @Š  „O‘ €  „  „˜ ² ‡  … ‚ ”E  ƒ  ƒ Œ ” E‡ Š €   ŒEŠ  „1š  Ž ƒ … € Š  ‹) Œ Š  „1Š … ƒ €   €  އ  … ‚ ”E  € ¡ „ › ~ „š „ Œ Š ‚Eƒ  „ š    “ ,‚ … „ ‡ €  €  ƒ E† Š  „O… € Ž  Ё   “ … „ ‡ ƒ š š ›H—ƒ Š ƒ3‚E €  Š Œ  …•O–—˜  ™3ƒ … „3† €  œ   † ƒ Š ƒ@‚E €  Š Œ  …˜˜•©ƒ … „OŠ … € ƒ  Ž š „   ƒ E†H„ … …  …ˆEƒ …  ‚Eƒ =Š “| Š ƒ E† ƒ … †T„ … …  …  ŒŠ  „ ‹„ ƒ  ›T~ „… „  ” š Š    “Š Eƒ Š•O–—˜  ™Eƒ ˆE„ Š  Š „ …… „ ‡ ƒ š šŠ Eƒ T˜˜•  @Š  „˜ ² ‡  … ‚ ”E”  Š € šŠ  „ Š … ƒ €  €  Ž@‡  … ‚ ”E… „ ƒ ‡  „ ° @ d “ … †  ®1ƒ а @ d “ … †  Š  „Š “ƒ š Ž  … € Š  ‹ ƒ … „ Š ƒ Š €  Š € ‡ ƒ š š ‰€ E† €  Š €  Ž ” €    ƒ ˆ š „ › •O–1— ˜ ™ |  ‚ … „ ‡ €  €  3€  € Ž  € ½E‡ ƒ  Š š ‰ˆE„ Š Š „ … Š Eƒ 3˜˜• “1 „ OŠ  „Š … ƒ €  €  ށ € ¡ „€  °  d “ … † 1 … š „   ›  1Œ Š „ …Š Eƒ ŠŠ  „1Š “ˆE„ ‡  ‹„  Š ƒ Š €  Š € ‡ ƒ š š ‰€ E† €   Š €  Ž ” €  Eƒ ˆ š „ › »  Š „ … „  Š €  Ž š ‰  ˜˜•  ‡ ‡ ƒ  €  Eƒ š š ‰€ E „ … Š ƒ“ … † ˆE ” E† ƒ … ‰ ˆE„ Š “„ „ Š  „Š “1ˆ ‰ Š „ E Œ ƒ ‘ €  „  „‡ Eƒ …  ƒ ‡ Š „ … 1“1 „ … „ ƒ •O–—˜  ™  ƒ š Š   ” Ž H€ ŠO‡  ” š †H „ Ž  ‹„  Š1ˆE„ Š “„ „ ˆ ‰ Š „    „ Ÿ „ …1†  „  › € Ž ” … „¶   “ HŠ  „d… „  ” š Š =  Š  „ž Ž š €   ² ƒ E ƒ … †‡  … ‚ ”E  ƒ 1ƒŒ ” E‡ Š €   Œ Š  „š  Ž ƒ … € Š  ‹d Œ Š  „Š … ƒ €  €  އ  … ‚ ”E € ¡ „ ›~ „Œ  … ‹ƒ Š€  Š  „ ƒ ‹„ ƒ €  € Ž ” … „3° ›d~ „3… „  ” š Š    “Š Eƒ ŠO•O–—˜  ™|Eƒ @… „ š € ƒ ˆ š ‰)ˆE„ Š Š „ …T… „ ‡ ƒ š šŠ Eƒ ˜˜•  dŠ  „ ² ƒ E ƒ … †‡  … ‚ ”E  „ ’ ‡ „ ‚ Š1“1 „ Š  „Š … ƒ €  €  އ  … ‚ ”E  € ¡ „€ ˆE„ Š “„ „ H°  @ “ … † ƒ E†|°   “ … †  “1 „ … „ Š  „Š “3ƒ š Ž  … € Š  ‹ƒ … „ Š ƒ Š €  Š € ‡ ƒ š ‰3€ E† €  Š €  Ž 
” €    ƒ ˆ š „ ›•O–1— ˜ ™ Eƒ 1ˆE„ Š Š „ … ‚ … „ ‡ €  €    Š  E Œ  … ˆE Š  š ƒ … Ž „ƒ E†@ ‹ƒ š šŠ … ƒ €  €  Ž3‡  … ‚E … ƒ ›O~ „Š “3ƒ š Ž   … € Š  ‹ ƒ … „Š € „ †ƒ Š Š … ƒ €  €  ށ € ¡ „   Œ °  d ƒ E†°    ƒ E† ˜˜• Eƒ  Ž … „ ƒ Š „ … ‚ … „ ‡ €  €  OˆE„ Š “„ „ OŠ    „‚E €  Š  ›  Š Ÿ „ … ‰ š ƒ … Ž „Š … ƒ €  €  އ  … ‚ ”E, € ¡ „ ,ˆE Š ƒ š Ž  … € Š  ‹ ‚E„ … Œ  … ‹„ ’ Š … „ ‹„ 𠉓„ š š ›   ( * 2 " &^0 3c/ % % & 0 2 2 * / $ . $ . 8 : ( * ‚ 0c0 $  & 0^„ …)†‡ƒ ˆ '$ % % % & / * .4/ 0)$ 2)2 * $ & 0)2 (0 & 8 3& . 22 " &2 & 0 2: ( * ‚ 0 9 0 (‚ & * ƒ € ( * 34/ . : &0 " ( % +c, &, & 2 2 & *2 " / . 0 " ( '. " & * &( . % / * 8 & * 2 & 0 2: ( * ‚ ( * / ;  ( *2 " &12 * / $ . $ . 87: ( * ‚ ( * /7/ , ( G &Š  '1( * + 0 ‚ & * € ( * 3c/ . : &7$ 0/ % * & / + 40 (8 ( ( +42 " / 212 " $ 01& K & : 21$ 01. & 8 ƒ % $ 8 & / , % & ;  ‡)‡)„ " / 0 ( . &^€ * & &‚ / * / 3& 2 & * 92 " &^( * + & * ( €2 " & 34( + & % ; ~ % %& 6 ‚ & * $ 34& . 2 0c$ .2 " $ 0c‚ / ‚ & * 0 &( * + & * 9 '" $ : " & / " / .c& 27/ % ;  Š    + & 0 : * $ , &/ 0)8 $ G $ . 82 " &1, & 0 2 * & 0 % 2 01( G & * / % % ; ~ % 2 " ( 8 "& / " / .c& 21/ % ;1+ (. ( 2+ $ 0 : 0 0 / + / ‚ 2 / 2 $ ( .( € 2 " &( * + & * 9 ')&" / G &1$ 34‚ % 3& . 2 & +/ .4/ + / ‚ ƒ 2 / 2 $ ( .c0 : " & 3&/ . +4€ ( . +42 " / 21$ 28 & . & * / % % c: " ( ( 0 & 07( * ƒ + & * € ( *1% / * 8 &  . 8 % $ 0 "c: ( * ‚ ( * / ; )" $ 0$ 34‚ * ( G & 0)2 " &7* & ƒ 0 % 2 0€ ( *% / * 8 &1: ( * ‚ ( * /, / , ( 22 '1(7‚ & * : & . 2 / 8 &1‚ ( $ . 2 0 9 3c/ 5 $ . 8 2 " & 3i0 2 / 2 $ 0 2 $ : / % % $ . + $ 0 2 $ . 
8 $ 0 " / , % &c€ * ( 3e2 " & * & 0 % 2 0( €)„ …)†‡ƒ ˆ ;                  » OŠ  € † €  ‡ ”E  €  O“„Œ  ‡ ”E  OŠ  „… „  ” š Š  € T‘ €   „  „ 1 € E‡ „Š „ ’ Ё „ Ž ‹„  Š ƒ Š €  |Eƒ  @… „ ƒ šƒ ‚ ‚ š €  ‡ ƒ Š €  @€ |ž  Ž š €   ›3‘ E € † „ … €  ŽO‚ … „ ‡ €  €  @ƒ E†@… „  ‡ ƒ š šŠ  Ž „ Š  „ …  € Šƒ ‚ ‚E„ ƒ … Š Eƒ Š•O–—˜  ™‚E„ … Œ  … ‹ ‹”E‡ HˆE„ Š Š „ …“1 „ HŠ  „@ƒ Ÿ ƒ € š ƒ ˆ š „OŠ … ƒ €  €  ŽT‡  … ‚ ”E €   ‹ƒ š š „ … Š Eƒ O°  @ “ … †     ‹„ “1Eƒ Š1ˆE„ Š Š „ … “1 „  Š  „Š … ƒ €  €  އ  … ‚ ”E1€  ˆE„ Š “„ „ 3°  @ “ … † 1ƒ E†O°   “ … †  ƒ E†=€ E† €  Š €  Ž ” €  Eƒ ˆ š ‰=“1 „ &Š  „@Š … ƒ €  €  Ž ‡  … ‚ ”E€ 1° @ d “ … †  › ~ „ … „€ ,  € ‹‚ š „  ‡  ‹‚ š „ Š „„ ’ ‚ š ƒ Eƒ Š €  Œ  … Š  „ Œ ƒ ‡ ŠŠ Eƒ Š•O–—˜  ™ ” Š ‚E„ … Œ  … ‹˜˜•+“1€ Š | ‹ƒ š š Š … ƒ €  €  އ  … ‚E … ƒ › ²  “„ Ÿ „ …  € Š € 1“ … Š œ „ „ ‚ €  Ž€  ‹€ E†Š Eƒ Š •O–1— ˜ ™ €  ƒ ˆ š „Š   ‰ ‚E Š  „  € ¡ „Š  „„ ’ €   Š „ E‡ „ Œ “ … † E„ Ÿ „ “1 „ € Š,Eƒ , 1Š … ƒ €  €  Ž ‡  … ‚ ”E  … „ ‡  Ž  € ¡ „3Š  „ ‹0“1 „ =Š  „ ‰H ‡ ‡ ” …€ =š ƒ Š „ …” Š Š „ …  ƒ E‡ „   ƒ E† „ Ž ‹„  Š Š  „ ‹= ” Š › » Š †  „ ,Š  €   € ‚Eƒ … Š  ˆ ‰O‹ƒ œ €  ŽŸ „ … ‰OŽ   †O”E „ Œ1 „  Š „ E‡ „ˆE ” E† ƒ … € „  ƒ E† Š  „ … ‚ ” E‡ Š ”Eƒ Š €   › »  € Š € ƒ š š ‰  “1 „ € Š Eƒ  š € Š Š š „  … 3„ ’ ‚E„ … € „ E‡ „ •O–1— ˜ ™Š „ E† Š @Š … „ ƒ Š„  Š € … „  „  Š „ E‡ „  «  … Š  „ …O‚ ” E‡ Š ”Eƒ Š €    ˆE ” E†H‚  … ƒ  „  ­ ƒ  €  Ž š „“ … †  1 Š  … €  Ž@Š  „ ‹+€ =ƒ3š €  Š Œ Œ ƒ ‹€ š  € ƒ …“ … †  ›ª ‡ ‡ ƒ  €  Eƒ š š ‰  ‚  … ƒ  „ † € Œ ƒ ‡ Š1‡  E €  Š  ŒE  š ‰  „1 …ƒ Œ „ “|“ … †  › ¹ ”E‡    … Š ‚  … ƒ  „  ƒ … „ š € œ „ š ‰Š  ‡ ‡ ” … ƒ Ž ƒ € „ ‹ˆE„ † † „ †€ š   Ž „ …‚  … ƒ  „  › µ  „ |Š  „ ‰|† EŠ  „ ‰TŠ „ E†TŠ @ˆE„3 „ Ž ‹„  Š „ †T ” Š  š „ ƒ Ÿ €  ŽŠ  „… „ ‹ƒ €  €  އ   Š € Ž ”  ”E1 „ Ž ‹„  Š 1 Œ Š  „ ‚  … ƒ  „1Š ˆE„ Š … „ 
ƒ Š „ †ƒ  Š   ” Ž Š  „ ‰“„ … „  „ ‚Eƒ … ƒ Š „ ‚  … ƒ  „  ›3~ € š „ ƒ † Š OŠ  „€   š ƒ Š €  Tƒ E†T Š  … ƒ Ž „  Œ1‹ … „“ … †  ›# …„ ’ ƒ ‹‚ š „ Š  „  „  “ … †T „   Š „ E‡ „8   3“ ” š †3Ž „  „ … ƒ š š ‰3ˆE„€  Š „ … ‚ … „ Š „ †Tƒ ƒ  €  Ž š „“ … †ƒ E† Š  … „ †€ Š  „š €  Š  Œ Œ ƒ ‹€ š € ƒ … “ … †  › » Œ€ Š ‡ ‡ ” … … „ †=ƒ Ž ƒ € =€ =Š  „@‚  … ƒ  „ 8    K L K  € Š “ ” š †OŠ „ E†OŠ OˆE„ „ Ž ‹„  Š „ †O ” Š  š „ ƒ Ÿ €  Ž  K L K Š  ˆE„Š … „ ƒ Š „ †ƒ Š   ” Ž Oƒ „ ‚Eƒ … ƒ Š „‚  … ƒ  „ › ˜˜•@  TŠ  „ Š  „ …Eƒ E†, €   ŠˆEƒ  „ †3 T ‰  ‚E Š  „  € ¡ €  Ž|“ … † ˆ ” Š… ƒ Š  „ … =„  Š € ‹ƒ Š €  Ž|Š  „ ‚ …  ˆEƒ ˆ € š € Š ‰  Œ “ … †  N z G L K M € Ÿ ƒ … €  ”E ‡   Š „ ’ Š  › ¹ € E‡ „“ … †ˆE ” E† ƒ … € „  ƒ ‚ ‚E„ ƒ …   𠉀 3 „ Ž ‹„  Š „ † Š … ƒ €  €  ŽTŠ „ ’ Š ˜˜•©†  „   Šš „ ƒ … HŒ …  ‹” E „ Ž  ‹„  Š „ †Š „ ’ Š ›~ „Œ ƒ ‡ ŠŠ Eƒ Š€ Š1‡ ƒ š „ ƒ …   š ‰Œ …  ‹ Š  „Š … ƒ €  €  އ  … ‚ ”E1ƒ E†  Š1Œ …  ‹„ ’ ‚E  ” … „Š ”    „ Ž ‹„  Š „ †Š „ ’ Š‹ƒ ‰ˆE„1  „1… „ ƒ   Š Eƒ Š€ Š… „ ¨ ” € … „  ƒš ƒ … Ž „ …Š … ƒ €  €  ށ ƒ ‹‚ š „ › ~ „Š … ƒ €  €  Žƒ E†Š „  Š ‹ƒ Š „ … € ƒ š Œ  …1Š  € 1„ ’ ‚E„ … €  ‹„  Š  “1 € 𠄆 €  Š € E‡ Š E‡ ƒ ‹„Œ …  ‹„ ’ ƒ ‡ Š š ‰Š  „ ƒ ‹„   ” … ‡ „ ›~ „‚E„ … Œ  … ‹ƒ E‡ „ ŒˆE Š @ƒ š Ž  … € Š  ‹‡ ƒ  ˆE„1„ ’ ‚E„ ‡ Š „ †Š † „ Š „ … €  … ƒ Š „1ƒ  Š  „1Š … ƒ €  €  Žƒ E†Š „  Š ‡  … ‚E … ƒ† € Ÿ „ … Ž „€ Ž „  … „ › ! 9,A 9 ? 
8 :"H5 } 6,¦ } } 5 IA ~ „‚ …  ‡ „   Œ1ƒ † ƒ ‚ Š €  Ž3ƒEƒ Š ” … ƒ šš ƒ  Ž ”Eƒ Ž „‚ …   ‡ „   €  Ž3ƒ š Ž  … € Š  ‹$Š 3ƒ „ “dš ƒ  Ž ”Eƒ Ž „  š † Ž … „ ƒ Š Š  „  … „ Š € ‡ ƒ š1€  Š „ … „  Š ›H» |Ž „  „ … ƒ š ƒ š Ž  … € Š  ‹Š Eƒ Š ‡ ƒ OˆE„ƒ † ƒ ‚ Š „ †Oƒ ” Š  ‹ƒ Š € ‡ ƒ š š ‰ƒ E†O‡  „ ƒ ‚ š ‰ƒ … „Š  PH Precision log2( text size ) 2 4 6 8 10 12 14 16 18 20 precision 0 10 20 30 40 50 60 70 80 90 100 MBDP Trained PPM PH Recall log2( text size) 2 4 6 8 10 12 14 16 18 20 recall 0 10 20 30 40 50 60 70 80 90 100 MBDP Trained PPM € Ž ” … „° =˜ … „ ‡ €  €  ƒ E†… „ ‡ ƒ š š, ƒ ƒ ‹‚ š „Š  „‘ €  „  „˜ ² ‡  … ‚ ”Eƒ 1ƒŒ ” E‡ Š €   Œ š  Ž @ Š … ƒ €  €  އ  … ‚ ”E  € ¡ „ =•O–—˜  ™Š … ƒ €  „ †ƒ E†˜˜•Š … ƒ €  „ †,› ž… …  …ˆEƒ … 1ƒ … „ Š “ Š ƒ E† ƒ … †„ … …  …  Œ Š  „‹„ ƒ  › Hansard Precision log2( text size ) 2 4 6 8 10 12 14 16 18 20 22 precision 0 10 20 30 40 50 60 70 80 90 100 MBDP Trained PPM Hansard Recall log2( text size ) 2 4 6 8 10 12 14 16 18 20 22 recall 0 10 20 30 40 50 60 70 80 90 100 MBDP Trained PPM € Ž ” … „¶ =˜ … „ ‡ €  €  @ƒ E†O… „ ‡ ƒ š š  @ƒ ƒ ‹‚ š „Š  „ž  Ž š €   ² ƒ E ƒ … †3‡  … ‚ ”Eƒ ƒŒ ” E‡ Š €  3 Œš  Ž @ Š … ƒ €  €  Ž ‡  … ‚ ”E1 € ¡ „ =•O–—˜  ™Š … ƒ €  „ †ƒ E†˜˜•$Š … ƒ €  „ †,›ž… …  …ˆEƒ … 1ƒ … „Š “ Š ƒ E† ƒ … †„ … …  …  Œ Š  „‹„ ƒ  › ˆE„‚ … „ Œ „ … … „ † Ÿ „ … Š    „Š Eƒ Šƒ … „‹ … „† € ·‡ ” š ŠŠ  ƒ † ƒ ‚ Š ƒ š š Š  „ …Š  €  Ž ˆE„ €  Ž3„ ¨ ”Eƒ š … „ ƒ … š ‰@ E› …  ‹ Š  € Š  „  … „ Š € ‡ ƒ š1‚E„ …  ‚E„ ‡ Š € Ÿ „ Š  „  1•O–—˜  ™ƒ ‚ ‚E„ ƒ … Š ˆE„‚ … „ Œ „ … ƒ ˆ š „ Š ˜˜•@› » Œ »Š   ” Ž  Š1» ‹€ Ž  Š ˆE„† …  ‚ ‚E„ † ˆ ‰  „ š € ‡  ‚ Š „ …,€  Š  ƒ ”  „ ’ ‚ š  … „ † … „ Ž €  O Œ1– …  „ “1€ Š @  š ‰3ƒ‚Eƒ š ‹Š  ‚T‡  ‹‚ ” Š „ …  † €  ‡  Ÿ „ …ƒ „ “|“1… € Š Š „ š ƒ  Ž ”Eƒ Ž „  ƒ E† „ „ †Š ‚E … Š ƒ Š „ ’ Ё „ Ž ‹„  Š „ … Š € Š “1€ Š ‹€  € ‹ƒ š „ ¸E … Š  » “ ” š † ‚ š ƒ Š ˆ … €  Ž•O–—˜  ™ƒ š   ŽE› » ‚ … ƒ ‡ Š € ‡ „ 
 ƒ † ƒ ‚ Š ƒ Š €  Š ƒ ‡  ‹‚ š „ Š „ 𠉐 „ “Tš ƒ   Ž ”Eƒ Ž „€ 1… „ š ƒ Š € Ÿ „ š ‰… ƒ … „ ›1» ŒŠ  „ „ “&š ƒ  Ž ”Eƒ Ž „… „ ‚  … „  „  Š ƒ =€ ‹‚E … Š ƒ  Šƒ ‚ ‚ š € ‡ ƒ Š €     € Ž  € ½E‡ ƒ  Š… „    ” … ‡ „ ƒ … „š € œ „ š ‰OŠ OˆE„ƒ Ÿ ƒ € š ƒ ˆ š „Œ  …Š  „‚E … Š4 ‚E„ … Eƒ ‚E„ Ÿ „ ƒEƒ E†   „ Ž ‹„  Š „ †‹€ š š €    “ … †‡  …  ‚ ”E ›H  † ƒ ‚ Š ƒ Š €  |Š Tƒ3 „ “Š „ ’ ŠŽ „  … „   “„ Ÿ „ …  € „ ’ Š … „ ‹„ š ‰|‡  ‹‹  ›= 1|€ E† € Ÿ € † ”Eƒ š … „  „ ƒ … ‡  „ … ‹€ Ž  ŠO“1ƒ  ŠOŠ =ƒ ‚ ‚ š ‰=Eƒ Š ” … ƒ šš ƒ  Ž ”Eƒ Ž „@‚ …  ‡ „    €  ŽŠ   š Š E  ƒ ‰    Ÿ „ š   ‚E „ Š … ‰   ‡ € „  Š € ½E‡ “1… € Š €  ŽE  …  „ Š  „ “   “1€ Š   ” Š1Š  „ˆEƒ ‡ œ €  Ž Œ € Ž  € ½E‡ ƒ  Š1… „    ” … ‡ „   ”E‡ ƒ  š ƒ … Ž „  ‡ ƒ … „ Œ ” š š ‰ƒ    Š ƒ Š „ †‡  … ‚E … ƒ › ¹ ”E‡ ƒ… „  „ ƒ … ‡  „ …‹€ Ž  ŠˆE„“1€ š š €  ŽŠ  „ Ž ‹„  а  @  …,„ Ÿ „ °  “ … † E Œ Š „ ’ Š € Š  „ „ “@Ž „  … „ ˆ ‰Eƒ E†, ˆ ” Š‚ …  ˆEƒ ˆ š ‰T‡  ” š †@  Š‚ …  † ”E‡ „Oƒ3‡  … ‚ ”E Œ °   “ … †  ,ƒ E†3‡ „ … Š ƒ €  š ‰O  а  “ … †  ›c …Š  € ƒ ‚  ‚ š € ‡ ƒ Š €   EŠ  „  ,•O–1— ˜ ™“ ” š †Oƒ ‚ ‚E„ ƒ … Š ˆE„Š  „ ˆE„  Š ƒ Ÿ ƒ € š ƒ ˆ š „ƒ š Ž  … € Š  ‹› » )Š  „TŒ ” Š ” … „ “„T‚ š ƒ )Š =€  Ÿ „  Š € Ž ƒ Š „@Š  „|€    ” „ ŒŠ … ƒ €  €  Ž@ T  „Ž „  … „Oƒ E†TŠ „  Š €  Ž@ Hƒ    Š  „ …   …”E €  Ž3ƒOš ƒ … Ž „Š … ƒ €  €  Ž3‡  … ‚ ”EŒ …  ‹   „ Ž „  … „ «  ”E‡ Oƒ 1  ” … Eƒ š €  Š € ‡ Š „ ’ Š ­1 ” ‚ ‚ š „ ‹„  Š „ †ˆ ‰ ƒ ‹ƒ š š Š … ƒ €  €  އ  … ‚ ”E Œ …  ‹dƒ† € ¸E„ … „  Š Š „  ŠŽ „  … „ › µ „@ƒ š  |‚ š ƒ =Š |€  Ÿ „  Š € Ž ƒ Š „3Š  „@‚E„ … Œ  … ‹ƒ E‡ „3 Œ •O–—˜  ™“1 „ € Š €  Ž € Ÿ „ ƒ  ‹ƒ š š  Eƒ E†   „ Ž ‹„  Š „ † Š … ƒ €  €  އ  … ‚ ”E1Œ  š š  “„ †ˆ ‰Oƒš ƒ … Ž „  ” E „ Ž ‹„  Š „ †  ‚ … ƒ ‡ Š € ‡ „ @‡  … ‚ ”E ˆE„ Œ  … „Š „  Š €  ŽE›&~ € … „ Ž € ‹„  ‡  ” š †ˆE„Ÿ „ … ‰ ”E „ Œ ” š Œ  …ƒ † ƒ ‚ Š ƒ Š €  Š ƒ1 „ “@Ž „  … „ › € Eƒ š š ‰ 
“„H‚ š ƒ dŠ &€  Ÿ „  Š € Ž ƒ Š „H‹ … „|„ š ƒ ˆE … ƒ Š „ ‚ …  ˆEƒ ˆ € š € Š ‰@‹ † „ š ƒ E†| „ ƒ … ‡ Tƒ š Ž  … € Š  ‹ › 1š Š €  ‹ƒ Š „ š ‰  “„  ‚E„Š † „ Ÿ „ š  ‚OƒŠ   š,Š Eƒ Š ‡ ƒ Oƒ † ƒ ‚ Š … ƒ ‚ € † š ‰HŠ =ƒT „ “ Ž „  … „3“1€ Š =š € Š Š š „@ … HEƒ E†   „ Ž ‹„  Š „ †Š … ƒ €  €  ŽŠ „ ’ Š ›                       …@‹ … „|€  Œ  … ‹ƒ Š €   ‚ š „ ƒ  „= „ „|Š  „H“„ ˆ € Š „ Œ  …=Š  „eQ ƒ  Ž ”Eƒ Ž „d¹ ‡ € „ E‡ „i„  „ ƒ … ‡ S …  ” ‚ ƒ Š           ›E 1  š €  „1† „ ‹ ŒE•O–—˜  ™ « ”E €  ސ Š … ƒ €  €  ŽŠ „ ’ Š “1Eƒ Š   „ Ÿ „ … ­‡ ƒ ˆE„Œ  ” E†  Š  „† „ ‹ ‚Eƒ Ž „ ›      ,       µ „|ƒ … „@Ÿ „ … ‰=Ž … ƒ Š „ Œ ” šŠ =–€ š š~ „ ƒ Eƒ   €  Ž ‰ €  Ž µ „   ƒ E†» ƒ  µ € Š Š „ Œ  … Eƒ … €  ŽŠ  „ € …  ” … ‡ „1‡  † „ “1€ Š ”E ƒ E† „ š ‚ €  Ž”EŠ … „ ‚ …  † ”E‡ „Š  „ € …1„ ’ ‚E„ … €  ‹„  Š  › ~ € ,“ … œ “1ƒ   ” ‚ ‚E … Š „ †, € ‚Eƒ … Š  ˆ ‰Ž … ƒ  Š  ” ‹ˆE„ …—‘± ¶ ± ³ °Œ …  ‹Š  „  ƒ Š €  Eƒ š » E Š € Š ” Š „   Œ ² „ ƒ š Š Š •1–› ;|9 9,? 9 A 6,9 } •€ ‡ Eƒ „ š› –… „  Š ›™ ¬ ¬ ¬ ƒ ›  1„ ·‡ € „  Š  ‚ …  ˆEƒ ˆ € š €   Š € ‡ ƒ š š ‰=  ” E†&ƒ š Ž  … € Š  ‹0Œ  …3 „ Ž ‹„  Š ƒ Š €  &ƒ E† “ … †† €  ‡  Ÿ „ … ‰ › G {  zK  K G L z z  E¶ = ´ ™ E™ ± ¯ › •€ ‡ Eƒ „ š›–… „  Š ›=™ ¬ ¬ ¬ ˆ ›T¹ ‚E„ „ ‡ H „ Ž ‹„  Š ƒ Š €   ƒ E†“ … †† €  ‡  Ÿ „ … ‰= )‡  ‹‚ ” Š ƒ Š €  Eƒ š ‚E„ …  ‚E„ ‡  Š € Ÿ „ ›  L K z M z!   
z O : KF){ K z{ K  ¶ = ° ¬  ¶ ± ™ › "“ œ  ¹  €  Ž‘ „  ŽE  € š ˆE„ … Š ² ›   ”  ŽEEƒ E†#"ƒ ‹ ƒ € µ   ŽE›&™ ¬ ¬ ¬ ›T  ”E† ‰@ |“ … †  ˆEƒ  „ †Tƒ E† €  Š „ Ž … ƒ š  ˆ € Ї  €  „  „1Š „ ’ Ї  ‹‚ … „   €  ƒ š Ž  … € Š  ‹O› $  N L zG J  C O  K&%H K L { G zUF  { K O < C  L(' z C  L H G ; O  zF{ K z{ K  ± = ° ™ ³  ° ° ³ › — „ š ‚  €  „H— ƒ Eƒ dƒ E†)•€ ‡ Eƒ „ š ›–… „  Š ›0™ ¬ ¬ ¬ › ª13Š  „† €  ‡  Ÿ „ … ‰ Œ  Ÿ „ š “ … †  š € œ „”  € Š Œ …  ‹ ” Š Š „ … ƒ E‡ „  =H 1)ƒ … Š € ½E‡ € ƒ š  š ƒ  Ž ”Eƒ Ž „@ Š ”E† ‰=“1€ Š  € ‹‚ š € ‡ ƒ Š €  EdŒ  …Eƒ Š € Ÿ „  š ƒ  Ž ”Eƒ Ž „ƒ ‡ ¨ ” €  € Š €   › $  N L zG J  C/10 I K L H K z O G J )M < {   J   < P+*K zK L G J  ™ ° ³ = ™ ¯ E™ ³ ³ › ” ˆ € 3— ƒ € E‘ … €  Š  ‚  „ … ¹,›› "  E ƒ E†~,„ ‡ œž„ Q,  ›T™ ¬ ¬ ¬ ›O  „ “ Š ƒ Š €  Š € ‡ ƒ šŒ  … ‹” š ƒŒ  …‡  €   „  „TŠ „ ’ Š@ „ Ž ‹„  Š ƒ Š €  )€ E‡  … ‚E … ƒ Š €  Ž=‡   Š „ ’  Š ”Eƒ š1€  Œ  … ‹ƒ Š €   ›|»  )7L  { K K z  M  C %!  F ' ; *' (  ‚Eƒ Ž „ 1³ °  ³ ¬ › § ” š € ƒ ²  ‡ œ „  ‹ƒ € „ …ƒ E†3‘ … € –… „ “›™ ¬ ¬ ³ ›ž… …  … † … € Ÿ „  „ Ž ‹„  Š ƒ Š €   Œ,‡  €  „  „ ›,»  !  HcHcN z { G ; O  z M  C !-,  ' )F  Ÿ  š ” ‹„³  ‚Eƒ Ž „ 1¯ ¬  ³ E› —ƒ Ÿ € †˜ƒ š ‹„ … ›™ ¬ ¬ ´ ›  HŠ … ƒ € Eƒ ˆ š „ … ” š „  ˆEƒ  „ †ƒ š Ž   … € Š  ‹Œ  …1“ … † „ Ž ‹„  Š ƒ Š €   › »  )7L  { K K z  M  C O  K. / O #%zz NG J  K K O z   C O  K0%M M  { G O  z C  L !  HIN O G O  zG J  z  N M O { M › § ƒ ‰O•@› ˜   Š „ƒ E† µ › –… ”E‡ „‘…  Œ Š ›™ ¬ ¬ ¯ › ¹ „ Ž= ƒ… „ Š ƒ … Ž „ Š ƒ ˆ š „1“ … † „ Ž ‹„  Š ƒ Š €  ‚ …  ‡ „ † ” … „1Œ  … €  Œ  … ‹ƒ Š €  3… „ Š … € „ Ÿ ƒ š ›~ „ ‡   € ‡ ƒ š„ ‚E … Š~1¬ ¯  °  1 € Ÿ „ …  € Š ‰ Œ •Oƒ   ƒ ‡  ”E „ Š Š    1‹ „ …  Š E•O › µ ›§E›~,„ ƒ Eƒ  ¹,›»  Ž š €  §    ›‘š „ ƒ … ‰ ƒ E† › ²  š ‹„  ›™ ¬ ¬ ³ ›‘ … … „ ‡ Š €  Ž„  Ž š €  Š „ ’ Š1”E €  Ž ‚ ‚ ‹d‹ † „ š  › » §E›  › ¹ Š  … „ …ƒ E†§ › ² › 1„ € Œ  „ †  € Š  …   )7L  { K K z  M  C1 G O G(!  
HIL K M M  z2!  z C K L ; K z{ K ,‚Eƒ Ž „ ° ³ ¬  ° ¬ ³ Q,  1š ƒ ‹€ Š   ‘ ›,» žžž ‘ ‹‚ ” Š „ … ¹  ‡ € „ Š ‰˜… „   › µ › §E› ~,„ ƒ Eƒ    €  Ž ‰ €  Ž µ „   1 † Ž „ … •O‡  ƒ ˆ  ƒ E† » ƒ  ² › µ € Š Š „  ›° ± ± ± › d‡  ‹‚ … „   €    ˆEƒ  „ †ƒ š  Ž  … € Š  ‹$Œ  …‡  €  „  „“ … †3 „ Ž ‹„  Š ƒ Š €   › !  H; IN O G O  zG J  z  N M O { M E° ¯ = ¶ ´  ¶ ¬ ¶ › 3 € ‹€  µ ”Hƒ E†“1‰  „ Š H~1 „  ŽE›&™ ¬ ¬ ¶ ›|‘ €  „  „ Š „ ’ Ё „ Ž ‹„  Š ƒ Š €  |Œ  …Š „ ’ Š… „ Š … € „ Ÿ ƒ š =3  ‡  € „ Ÿ „  ‹„  Š  ƒ E†‚ …  ˆ š „ ‹ › $ %F ' F  = ¶ °  ° ›
2001
Towards Automatic Classification of Discourse Elements in Essays

Jill Burstein, ETS Technologies, MS 18E, Princeton, NJ 08541, USA, [email protected]
Daniel Marcu, ISI/USC, 4676 Admiralty Way, Marina del Rey, CA, USA, [email protected]
Slava Andreyev, ETS Technologies, MS 18E, Princeton, NJ 08541, USA, [email protected]
Martin Chodorow, Hunter College, The City University of New York, New York, NY, USA, [email protected]

Abstract

Educators are interested in essay evaluation systems that include feedback about writing features that can facilitate the essay revision process. For instance, if the thesis statement of a student’s essay could be automatically identified, the student could then use this information to reflect on the thesis statement with regard to its quality, and its relationship to other discourse elements in the essay. Using a relatively small corpus of manually annotated data, we use Bayesian classification to identify thesis statements. This method yields results that are much closer to human performance than the results produced by two baseline systems.

1 Introduction

Automated essay scoring technology can achieve agreement with a single human judge that is comparable to agreement between two single human judges (Burstein et al., 1998; Foltz et al., 1998; Larkey, 1998; Page and Peterson, 1995). Unfortunately, providing students with just a score (grade) is insufficient for instruction. To help students improve their writing skills, writing evaluation systems need to provide feedback that is specific to each individual’s writing and that is applicable to essay revision. The factors that contribute to improvement of student writing include refined sentence structure, variety of appropriate word usage, and organizational structure. The improvement of organizational structure is believed to be critical in the essay revision process toward overall improvement of essay quality.
Therefore, it would be desirable to have a system that could indicate, as feedback to students, the discourse elements in their essays. Such a system could present to students a guided list of questions to consider about the quality of the discourse. For instance, it has been suggested by writing experts that if the thesis statement1 of a student’s essay could be automatically provided, the student could then use this information to reflect on the thesis statement and its quality. In addition, such an instructional application could utilize the thesis statement to discuss other types of discourse elements in the essay, such as the relationship between the thesis statement and the conclusion, and the connection between the thesis statement and the main points in the essay.

In the teaching of writing, in order to facilitate the revision process, students are often presented with ‘Revision Checklists.’ A revision checklist is a list of questions posed to the student to help the student reflect on the quality of his or her writing. Such a list might pose questions such as:

a) Is the intention of my thesis statement clear?
b) Does my thesis statement respond directly to the essay question?
c) Are the main points in my essay clearly stated?
d) Do the main points in my essay relate to my original thesis statement?

If these questions are expressed in general terms, they are of little help; to be useful, they need to be grounded and need to refer explicitly to the essays students write (Scardamalia and Bereiter, 1985; White, 1994). The ability to automatically identify and present to students the discourse elements in their essays can help them focus and reflect on the critical discourse structure of the essays. In addition, the ability for the application to indicate to the student that a discourse element could not be located, perhaps due to the ‘lack of clarity’ of this element, could also be helpful. Assuming that such a capability was reliable, this would force the writer to think about the clarity of an intended discourse element, such as a thesis statement.

1 A thesis statement is generally defined as the sentence that explicitly identifies the purpose of the paper or previews its main ideas. See the Literacy Education On-line (LEO) site at http://leo.stcloudstate.edu.

(Annotator 1) “In my opinion student should do what they want to do because they feel everything and they can't have anythig they feel because they probably feel to do just because other people do it not they want it.

(Annotator 2) I think doing what students want is good for them. I sure they want to achieve in the highest place but most of the student give up. They they don’t get what they want. To get what they want, they have to be so strong and take the lesson from their parents Even take a risk, go to the library, and study hard by doing different thing. Some student they do not get what they want because of their family. Their family might be careless about their children so this kind of student who does not get support, loving from their family might not get what he wants. He just going to do what he feels right away. So student need a support from their family they has to learn from them and from their background. I learn from my background I will be the first generation who is going to gradguate from university that is what I want.”

Figure 1: Sample student essay with human annotations of thesis statements.

Using a relatively small corpus of essay data where thesis statements have been manually annotated, we built a Bayesian classifier using the following features: sentence position; words commonly used in thesis statements; and discourse features, based on Rhetorical Structure Theory (RST) parses (Mann and Thompson, 1988; Marcu, 2000). Our results indicate that this classification technique may be used toward automatic identification of thesis statements in essays. Furthermore, we show that this method generalizes across essay topics.
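The Bayesian classification step can be illustrated with a minimal Naive Bayes sketch over discrete per-sentence features. The feature names, the toy training examples, and the add-one smoothing below are our own assumptions for illustration, not the paper's actual feature set or model design:

```python
import math
from collections import defaultdict

def train(examples):
    """examples: list of (feature_list, label) pairs, e.g. thesis vs. other."""
    label_counts = defaultdict(int)
    feat_counts = defaultdict(lambda: defaultdict(int))
    vocab = set()
    for feats, label in examples:
        label_counts[label] += 1
        for f in feats:
            feat_counts[label][f] += 1
            vocab.add(f)
    return label_counts, feat_counts, vocab

def classify(feats, model):
    """Pick the label maximizing log P(label) + sum of log P(feature | label)."""
    label_counts, feat_counts, vocab = model
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label, count in label_counts.items():
        score = math.log(count / total)
        denom = sum(feat_counts[label].values()) + len(vocab)
        for f in feats:
            # Add-one smoothing so unseen features do not zero out the score.
            score += math.log((feat_counts[label][f] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Invented toy features: position in the essay, a cue word, an RST relation.
examples = [
    (["pos=first_para", "word=opinion", "rst=Elaboration"], "thesis"),
    (["pos=first_para", "word=believe"], "thesis"),
    (["pos=body", "word=example"], "other"),
    (["pos=body", "word=because", "rst=Evidence"], "other"),
]
model = train(examples)
print(classify(["pos=first_para", "word=opinion"], model))  # -> thesis
```

A sentence whose features look like the "thesis" training examples (first-paragraph position, opinion cue word) gets the higher posterior score and is labeled accordingly.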
2 What Are Thesis Statements?

A thesis statement is defined as the sentence that explicitly identifies the purpose of the paper or previews its main ideas (see footnote 1). This definition seems straightforward enough, and would lead one to believe that identifying the thesis statement in an essay would be clear-cut, even for people. However, the essay in Figure 1 is a common example of the kind of first-draft writing that our system has to handle. Figure 1 shows a student response to the essay question:

Often in life we experience a conflict in choosing between something we "want" to do and something we feel we "should" do. In your opinion, are there any circumstances in which it is better for people to do what they "want" to do rather than what they feel they "should" do? Support your position with evidence from your own experience or your observations of other people.

The writing in Figure 1 illustrates one kind of challenge in automatic identification of discourse elements, such as thesis statements. In this case, the two human annotators independently chose different text as the thesis statement (the two texts highlighted in bold and italics in Figure 1). In this kind of first-draft writing, it is not uncommon for writers to repeat ideas, or express more than one general opinion about the topic, resulting in text that seems to contain multiple thesis statements.

Before building a system that automatically identifies thesis statements in essays, we wanted to determine whether the task was well-defined. In collaboration with two writing experts, a simple discourse-based annotation protocol was developed to manually annotate discourse elements in essays for a single essay topic. This was the initial attempt to annotate essay data using discourse elements generally associated with essay structure, such as thesis statement, concluding statement, and topic sentences of the essay’s main ideas. The writing experts defined the characteristics of the discourse labels.
These experts then annotated 100 essay responses to one English Proficiency Test (EPT) question, called Topic B, using a PC-based interface implemented in Java. We computed the agreement between the two human annotators using the kappa coefficient (Siegel and Castellan, 1988), a statistic used extensively in previous empirical studies of discourse. The kappa statistic measures pairwise agreement among a set of coders who make categorical judgments, correcting for chance expected agreement. The kappa agreement between the two annotators with respect to the thesis statement labels was 0.733 (N=2391, where 2391 is the total number of sentences across all annotated essay responses). This indicates good agreement by the standards of content analysis research (Krippendorff, 1980), which suggests that kappa values higher than 0.8 reflect very high agreement and values higher than 0.6 reflect good agreement. The corresponding z statistic was 27.1, far exceeding the z value of 2.32 required for significance at the 0.01 level (Siegel and Castellan, 1988).

In the early stages of our project, it was suggested to us that thesis statements reflect the most important sentences in essays. In terms of summarization, these sentences would represent indicative, generic summaries (Mani and Maybury, 1999; Marcu, 2000). To test this hypothesis (and to estimate the adequacy of using summarization technology for identifying thesis statements), we carried out an additional experiment. The same annotation tool was used with two different human judges, who were asked this time to identify the most important sentence of each essay. The agreement between human judges on the task of identifying summary sentences was significantly lower: the kappa was 0.603 (N=2391). Tables 1a and 1b summarize the results of the annotation experiments. Table 1a shows the degree of agreement between human judges on the task of identifying thesis statements and generic summary sentences.
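The pairwise kappa computation used in these experiments can be sketched as follows. This is a minimal illustration of the statistic itself, not the authors' implementation; the function name is ours.

```python
def cohens_kappa(labels_a, labels_b):
    """Pairwise agreement between two coders making categorical judgments,
    corrected for chance expected agreement: (P_o - P_e) / (1 - P_e)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    # Observed agreement: fraction of items labelled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: sum over categories of the product of each
    # annotator's marginal rate for that category.
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)
```

For the experiment above, each of the N=2391 sentences would receive a label such as "thesis" or "other" from each annotator.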
The agreement figures are given using the kappa statistic and the relative precision (P), recall (R), and F-values (F), which reflect the ability of one judge to identify the sentences labeled as thesis statements or summary sentences by the other judge. The results in Table 1a show that the task of thesis statement identification is much better defined than the task of identifying important summary sentences. In addition, Table 1b indicates that there is very little overlap between thesis and generic summary sentences: just 6% of the summary sentences were labeled by human judges as thesis statement sentences. This strongly suggests that there are critical differences between thesis statements and summary sentences, at least in first-draft essay writing. It is possible that thesis statements reflect an intentional facet (Grosz and Sidner, 1986) of language, while summary sentences reflect a semantic one (Martin, 1992). More detailed experiments need to be carried out though before proper conclusions can be derived.

Table 1a: Agreement between human judges on thesis and summary sentence identification.

  Metric        Thesis Statements   Summary Sentences
  Kappa         0.733               0.603
  P (1 vs. 2)   0.73                0.44
  R (1 vs. 2)   0.69                0.60
  F (1 vs. 2)   0.71                0.51

Table 1b: Percent overlap between human labeled thesis statements and summary sentences.

  Thesis statements vs. Summary sentences   Percent Overlap: 0.06

The results in Table 1a provide an estimate for an upper bound of a thesis statement identification algorithm. If one can build an automatic classifier that identifies thesis statements at recall and precision levels as high as 70%, the performance of such a classifier will be indistinguishable from the performance of humans.

3 A Bayesian Classifier for Identifying Thesis Statements

3.1 Description of the Approach

We initially built a Bayesian classifier for thesis statements using essay responses to one English Proficiency Test (EPT) test question: Topic B.
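The relative precision, recall, and F-values reported here can be computed as below: a sketch that treats one judge's selected sentences as the reference set for scoring the other's.

```python
def relative_prf(selected_by_1, selected_by_2):
    """Relative precision/recall/F of judge 1 measured against judge 2:
    how well judge 1's selected sentences match those of judge 2."""
    s1, s2 = set(selected_by_1), set(selected_by_2)
    overlap = len(s1 & s2)
    p = overlap / len(s1) if s1 else 0.0  # of 1's picks, fraction 2 also picked
    r = overlap / len(s2) if s2 else 0.0  # of 2's picks, fraction 1 also picked
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```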
McCallum and Nigam (1998) discuss two probabilistic models for text classification that can be used to train Bayesian independence classifiers. They describe the multinomial model as being the more traditional approach for statistical language modeling (especially in speech recognition applications), where a document is represented by a set of word occurrences, and where probability estimates reflect the number of word occurrences in a document. In the alternative, multivariate Bernoulli model, a document is represented by both the absence and presence of features. On a text classification task, McCallum and Nigam (1998) show that the multivariate Bernoulli model performs well with small vocabularies, as opposed to the multinomial model, which performs better when larger vocabularies are involved. Larkey (1998) uses the multivariate Bernoulli approach for an essay scoring task, and her results are consistent with the results of McCallum and Nigam (1998) (see also Larkey and Croft (1996) for descriptions of additional applications). In Larkey (1998), sets of essays used for training scoring models typically contain fewer than 300 documents. Furthermore, the vocabulary used across these documents tends to be restricted. Based on the success of Larkey’s experiments, and McCallum and Nigam’s finding that the multivariate Bernoulli model performs better on texts with small vocabularies, this approach would seem to be the likely choice when dealing with data sets of essay responses.
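A multivariate Bernoulli classifier of this kind can be sketched as follows. This is an illustration under our own naming, not the authors' code. Note that dividing each term by the feature prior P(A_i), as the paper's formula does, shifts every class's score by the same class-independent constant, so it is omitted here without changing which class scores highest.

```python
import math

def train_bernoulli(docs, labels, vocab, alpha=1.0):
    """Estimate class priors and per-feature presence probabilities for a
    multivariate Bernoulli model; alpha is a Laplace smoothing constant,
    used (as in the text) to avoid zero probability estimates."""
    classes = sorted(set(labels))
    priors, cond = {}, {}
    for c in classes:
        members = [set(d) for d, l in zip(docs, labels) if l == c]
        priors[c] = len(members) / len(docs)
        cond[c] = {f: (sum(f in d for d in members) + alpha) /
                      (len(members) + 2 * alpha) for f in vocab}
    return priors, cond

def log_posterior(doc, c, priors, cond, vocab):
    """log P(c) plus, for every vocabulary feature, the log probability of
    its presence or absence in the document, given the class."""
    present = set(doc)
    score = math.log(priors[c])
    for f in vocab:
        p = cond[c][f]
        score += math.log(p if f in present else 1.0 - p)
    return score
```

Scoring every sentence of an essay this way and taking the highest-scoring one yields a single thesis-statement candidate per essay.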
Therefore, we have adopted this approach in order to build a thesis statement classifier that can select from an essay the sentence that is the most likely candidate to be labeled as the thesis statement.² In our experiments, we used three general feature types to build the classifier: sentence position; words commonly occurring in thesis statements; and RST labels from outputs generated by an existing rhetorical structure parser (Marcu, 2000). We trained the classifier to predict thesis statements in an essay. Using the multivariate Bernoulli formula, below, this gives us the log probability that a sentence (S) in an essay belongs to the class (T) of sentences that are thesis statements. We found that it helped performance to use a Laplace estimator to deal with cases where the probability estimates were equal to zero.

\log P(T \mid S) = \log P(T) + \sum_i \begin{cases} \log\left(P(A_i \mid T) / P(A_i)\right) & \text{if } S \text{ contains } A_i \\ \log\left(P(\bar{A_i} \mid T) / P(\bar{A_i})\right) & \text{if } S \text{ does not contain } A_i \end{cases}

In this formula, P(T) is the prior probability that a sentence is in class T; P(A_i|T) is the conditional probability of a sentence having feature A_i, given that the sentence is in T; P(A_i) is the prior probability that a sentence contains feature A_i; P(Ā_i|T) is the conditional probability that a sentence does not have feature A_i, given that it is in T; and P(Ā_i) is the prior probability that a sentence does not contain feature A_i.

3.2 Features Used to Classify Thesis Statements

3.2.1 Positional Feature

We found that the likelihood of a thesis statement occurring at the beginning of essays was quite high in the human annotated data. To account for this, we used one feature that reflected the position of each sentence in an essay.

3.2.2 Lexical Features

All words from human annotated thesis statements were used to build the Bayesian classifier. We will refer to these words as the thesis word list. From the training data, a vocabulary list was created that included one occurrence of each word used in all resolved human annotations of thesis statements. All words in this list were used as independent lexical features. We found that the use of various lists of stop words decreased the performance of our classifier, so we did not use them.

3.2.3 Rhetorical Structure Theory Features

According to RST (Mann and Thompson, 1988), one can associate a rhetorical structure tree to any text. The leaves of the tree correspond to elementary discourse units and the internal nodes correspond to contiguous text spans. Each node in a tree is characterized by a status (nucleus or satellite) and a rhetorical relation, which is a relation that holds between two non-overlapping text spans. The distinction between nuclei and satellites comes from the empirical observation that the nucleus expresses what is more essential to the writer’s intention than the satellite, and that the nucleus of a rhetorical relation is comprehensible independent of the satellite, but not vice versa. When spans are equally important, the relation is multinuclear. Rhetorical relations reflect semantic, intentional, and textual relations that hold between text spans, as is illustrated in Figure 2.

² In our research, we also trained classifiers using a classical Bayes approach, where two classifiers were built: a thesis classifier and a non-thesis classifier. In the classical Bayes implementation, each classifier was trained only on positive feature evidence, in contrast to the multivariate Bernoulli approach, which trains classifiers on both the absence and presence of features. Since the performance of the classical Bayes classifiers was lower than the performance of the Bernoulli classifier, we report here only the performance of the latter.
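Taken together, the three feature types described in this section can be collected per sentence roughly as below. The feature-name scheme and argument names are our own illustration, not the system's actual representation.

```python
def sentence_features(index, num_sentences, tokens, rst_status, rst_relation,
                      thesis_words):
    """Binary features for one sentence: its (coarse) position in the essay,
    the thesis-list words it contains, and the RST status and relation of
    its parent node in the automatically built RST tree."""
    features = set()
    # Positional feature: thesis statements tend to occur early in essays.
    features.add("position=%.1f" % (index / num_sentences))
    # Lexical features: words shared with annotated thesis statements.
    features.update("word=" + t.lower() for t in tokens
                    if t.lower() in thesis_words)
    # RST features: nucleus/satellite status and rhetorical relation.
    features.add("rst_status=" + rst_status)
    features.add("rst_relation=" + rst_relation)
    return features
```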
For example, one text span may elaborate on another text span; the information in two text spans may be in contrast; and the information in one text span may provide background for the information presented in another text span. Figure 2 displays, in the style of Mann and Thompson (1988), the rhetorical structure tree of a text fragment. In Figure 2, nuclei are represented using straight lines; satellites using arcs. Internal nodes are labeled with rhetorical relation names. We built RST trees automatically for each essay using the cue-phrase-based discourse parser of Marcu (2000). We then associated with each sentence in an essay a feature that reflected the status of its parent node (nucleus or satellite), and another feature that reflected its rhetorical relation. For example, for the last sentence in Figure 2 we associated the status satellite and the relation elaboration, because that sentence is the satellite of an elaboration relation. For sentence 2, we associated the status nucleus and the relation elaboration, because that sentence is the nucleus of an elaboration relation.

We found that some rhetorical relations occurred more frequently in sentences annotated as thesis statements. Therefore, the conditional probabilities for such relations were higher and provided evidence that certain sentences were thesis statements. The Contrast relation shown in Figure 2, for example, was a rhetorical relation that occurred more often in thesis statements. Arguably, there may be some overlap between the words in thesis statements and the rhetorical relations used to build the classifier. The RST relations, however, capture long-distance relations between text spans, which are not accounted for by the words in our thesis word list.

3.3 Evaluation of the Bayesian classifier

We estimated the performance of our system using a six-fold cross validation procedure. We partitioned the 93 essays that were labeled by both human annotators with a thesis statement into six groups.
(The judges agreed that 7 of the 100 essays they annotated had no thesis statement.) We trained six times on 5/6 of the labeled data and evaluated the performance on the other 1/6 of the data. The evaluation results in Table 2 show the average performance of our classifier with respect to the resolved annotation (Alg. wrt. Resolved), using traditional recall (R), precision (P), and F-value (F) metrics. For purposes of comparison, Table 2 also shows the performance of two baselines: the random baseline classifies the thesis statements randomly, while the position baseline assumes that the thesis statement is given by the first sentence in each essay.

Figure 2: Example of RST tree.

Table 2: Performance of the thesis statement classifier.

  System vs. system                 P      R      F
  Random baseline wrt. Resolved     0.06   0.05   0.06
  Position baseline wrt. Resolved   0.26   0.22   0.24
  Alg. wrt. Resolved                0.55   0.46   0.50
  1 wrt. 2                          0.73   0.69   0.71
  1 wrt. Resolved                   0.77   0.78   0.78
  2 wrt. Resolved                   0.68   0.74   0.71

4 Generality of the Thesis Statement Identifier

In commercial settings, it is crucial that a classifier such as the one discussed in Section 3 generalizes across different test questions. New test questions are introduced on a regular basis, so it is important that a classifier that works well for a given data set also works well for other data sets, without requiring additional annotations and training. For the thesis statement classifier it was important to determine whether the positional, lexical, and RST-specific features are topic independent, and thus generalizable to new test questions. If so, this would indicate that we could annotate thesis statements across a number of topics, and re-use the algorithm on additional topics, without further annotation. We asked a writing expert to manually annotate the thesis statement in approximately 45 essays for each of 4 additional test questions: Topics A, C, D and E.
The annotator completed this task using the same interface that was used by the two annotators in Experiment 1. To test generalizability for each of the five EPT questions, the thesis sentences selected by a writing expert were used for building the classifier. Five combinations of 4 prompts were used to build the classifier in each case, and the resulting classifier was then cross-validated on the fifth topic, which was treated as test data. To evaluate the performance of each of the classifiers, agreement was calculated for each ‘cross-validation’ sample (single topic) by comparing the algorithm's selection to our writing expert’s thesis statement selections. For example, we trained on Topics A, C, D, and E, using the thesis statements selected manually. This classifier was then used to select, automatically, thesis statements for Topic B. In the evaluation, the algorithm’s selection was compared to the manually selected set of thesis statements for Topic B, and agreement was calculated. Table 3 illustrates that in all but one case, agreement exceeds both baselines from Table 2. In this set of manual annotations, the human judge almost always selected one sentence as the thesis statement. This is why Precision, Recall, and the F-value are often equal in Table 3.

Table 3: Cross-topic generalizability of the thesis statement classifier.

  Training Topics   CV Topic   P      R      F
  ABCD              E          0.36   0.36   0.36
  ABCE              D          0.49   0.49   0.49
  ABDE              C          0.45   0.45   0.45
  ACDE              B          0.60   0.59   0.59
  BCDE              A          0.25   0.24   0.25
  Mean                         0.43   0.43   0.43

5 Discussion and Conclusions

The results of our experimental work indicate that the task of identifying thesis statements in essays is well defined. The empirical evaluation of our algorithm indicates that with a relatively small corpus of manually annotated essay data, one can build a Bayes classifier that identifies thesis statements with good accuracy. The evaluations also provide evidence that this method for automated thesis selection in essays is generalizable.
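The cross-topic protocol behind this evaluation can be sketched generically as below; `train` and `evaluate` stand in for the classifier-building and agreement-scoring steps and are assumptions, not the original system's interfaces.

```python
def leave_one_topic_out(essays_by_topic, train, evaluate):
    """For each topic, build a classifier on all remaining topics and
    evaluate it on the held-out topic (the cross-topic protocol)."""
    results = {}
    for held_out, test_essays in essays_by_topic.items():
        # Pool the training material from every other topic.
        training = [essay for topic, essays in essays_by_topic.items()
                    if topic != held_out for essay in essays]
        model = train(training)
        results[held_out] = evaluate(model, test_essays)
    return results
```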
That is, once trained on a few human annotated prompts, it can be applied to other prompts given a similar population of writers, in this case, writers at the college freshman level. The larger implication is that we begin to see that there are underlying discourse elements in essays that can be identified, independent of the topic of the test question. For essay evaluation applications this is critical, since new test questions are continuously being introduced into on-line essay evaluation applications. Our results compare favorably with results reported by Teufel and Moens (1999), who also use Bayes classification techniques to identify rhetorical arguments such as aim and background in scientific texts, although the texts we are working with are extremely noisy. Because EPT essays are often produced for high-stakes exams, under severe time constraints, they are often ungrammatical, repetitive, and poorly organized at the discourse level. Current investigations indicate that this technique can be used to reliably identify other essay-specific discourse elements, such as concluding statements, main points of arguments, and supporting ideas. In addition, we are exploring how we can use estimated probabilities as confidence measures of the decisions made by the system. If the confidence level associated with the identification of a thesis statement is low, the system would instruct the student that no explicit thesis statement has been found in the essay.

Acknowledgements

We would like to thank our annotation experts, Marisa Farnum, Hilary Persky, Todd Farley, and Andrea King.

References

Burstein, J., Kukich, K., Wolff, S., Lu, C., Chodorow, M., Braden-Harder, L. and Harris, M.D. (1998). Automated Scoring Using A Hybrid Feature Identification Technique. Proceedings of ACL, 206-210.

Foltz, P. W., Kintsch, W., and Landauer, T. (1998). The Measurement of Textual Coherence with Latent Semantic Analysis. Discourse Processes, 25(2&3), 285-307.

Grosz, B. and Sidner, C. (1986). Attention, Intention, and the Structure of Discourse. Computational Linguistics, 12(3), 175-204.

Krippendorff, K. (1980). Content Analysis: An Introduction to Its Methodology. Sage Publ.

Larkey, L. and Croft, W. B. (1996). Combining Classifiers in Text Categorization. Proceedings of SIGIR, 289-298.

Larkey, L. (1998). Automatic Essay Grading Using Text Categorization Techniques. Proceedings of SIGIR, 90-95.

Mani, I. and Maybury, M. (1999). Advances in Automatic Text Summarization. The MIT Press.

Mann, W.C. and Thompson, S.A. (1988). Rhetorical Structure Theory: Toward a Functional Theory of Text Organization. Text, 8(3), 243-281.

Martin, J. (1992). English Text: System and Structure. John Benjamin Publishers.

Marcu, D. (2000). The Theory and Practice of Discourse Parsing and Summarization. The MIT Press.

McCallum, A. and Nigam, K. (1998). A Comparison of Event Models for Naive Bayes Text Classification. The AAAI-98 Workshop on "Learning for Text Categorization".

Page, E.B. and Peterson, N. (1995). The computer moves into essay grading: updating the ancient test. Phi Delta Kappa, March, 561-565.

Scardamalia, M. and Bereiter, C. (1985). Development of Dialectical Processes in Composition. In Olson, D. R., Torrance, N. and Hildyard, A. (eds), Literacy, Language, and Learning: The nature of consequences of reading and writing. Cambridge University Press.

Siegel, S. and Castellan, N.J. (1988). Nonparametric Statistics for the Behavioral Sciences. McGraw-Hill.

Teufel, S. and Moens, M. (1999). Discourse-level argumentation in scientific articles. Proceedings of the ACL99 Workshop on Standards and Tools for Discourse Tagging.

White, E.M. (1994). Teaching and Assessing Writing. Jossey-Bass Publishers, 103-108.
From RAGS to RICHES: exploiting the potential of a flexible generation architecture

Lynne Cahill, John Carroll, Roger Evans, Daniel Paiva, Richard Power, Donia Scott and Kees van Deemter

ITRI, University of Brighton, Brighton, BN2 4GJ, UK
[email protected]

School of Cognitive and Computing Sciences, University of Sussex, Brighton, BN1 9QH, UK
[email protected]

Abstract

The RAGS proposals for generic specification of NLG systems include a detailed account of data representation, but only an outline view of processing aspects. In this paper we introduce a modular processing architecture with a concrete implementation which aims to meet the RAGS goals of transparency and reusability. We illustrate the model with the RICHES system – a generation system built from simple linguistically-motivated modules.

1 Introduction

As part of the RAGS (Reference Architecture for Generation Systems) project, Mellish et al (2000) introduces a framework for the representation of data in NLG systems, the RAGS ‘data model’. This model offers a formally well-defined declarative representation language, which supports the complex and dynamic data requirements of generation systems, e.g. different levels of representation (conceptual to syntax), mixed representations that cut across levels, partial and shared structures, and ‘canned’ representations.

[Footnote: We would like to acknowledge the financial support of the EPSRC (RAGS – Reference Architecture for Generation Systems: grant GR/L77102 to Donia Scott), as well as the intellectual contribution of our partners at Edinburgh (Chris Mellish and Mike Reape: grant GR/L77041 to Mellish) and other colleagues at the ITRI, especially Nedjet Bouayad-Agha. We would also like to acknowledge the contribution of colleagues who worked on the RICHES system previously: Neil Tipper and Rodger Kibble. We are grateful to our anonymous referees for their helpful comments.]

However,
RAGS, as described in that paper, says very little about the functional structure of an NLG system, or the issues arising from more complex processing regimes (see for example Robin (1994), Inuie et al. (1992) for further discussion). NLG systems, especially end-to-end, applied NLG systems, have many functionalities in common. Reiter (1994) proposed an analysis of such systems in terms of a simple three stage pipeline. More recently Cahill et al (1999) attempted to repeat the analysis, but found that while most systems did implement a pipeline, they did not implement the same pipeline – different functionalities occurred in different ways and different orders in different systems. But this survey did identify a number of core functionalities which seem to occur during the execution of most systems. In order to accommodate this result, a ‘process model’ was sketched which aimed to support both pipelines and more complex control regimes in a flexible but structured way (see Cahill et al. (1999), RAGS (2000)). In this paper, we describe our attempts to test these ideas in a simple NLG application that is based on a concrete realisation of such an architecture.¹

The RAGS data model aims to promote comparability and re-usability in the NLG research community, as well as insight into the organisation and processing of linguistic data in NLG. The present work has similar goals for the processing aspects: to propose a general approach to organising whole NLG systems in a way which promotes the same ideals. In addition, we aim to test the claims that the RAGS data model approach supports the flexible processing of information in an NLG setting.

¹ More details about the RAGS project, the RICHES implementation and the OASYS subsystem can be found at the RAGS project web site: http://www.itri.bton.ac.uk/projects/rags.

2 The RAGS data model

The starting point for our work here is the RAGS data model as presented in Mellish et al (2000).
This model distinguishes the following five levels of data representation that underpin the generation process:

Rhetorical representations (RhetReps) define how propositions within a text are related. For example, the sentence “Blow your nose, so that it is clear” can be considered to consist of two propositions: BLOW YOUR NOSE and YOUR NOSE IS CLEAR, connected by a relation like MOTIVATION.

Document representations (DocReps) encode information about the physical layout of a document, such as textual level (paragraph, orthographic sentence, etc.), layout (indentation, bullet lists etc.) and their relative positions.

Semantic representations (SemReps) specify information about the meaning of individual propositions. For each proposition, this includes the predicate and its arguments, as well as links to underlying domain objects and scoping information.

Syntactic representations (SynReps) define “abstract” syntactic information such as lexical features (FORM, ROOT etc.) and syntactic arguments and adjuncts (SUBJECT, OBJECT etc.).

Quote representations are used to represent literal unanalysed content used by a generator, such as canned text, pictures or tables.

The representations aim to cover the core common requirements of NLG systems, while avoiding over-commitment on less clearly agreed issues relating to conceptual representation on the one hand and concrete syntax and document rendering on the other. When one considers processing aspects, however, the picture tends to be a lot less tidy: typical modules in real NLG systems often manipulate data at several levels at once, building structures incrementally, and often working with ‘mixed’ structures, which include information from more than one level. Furthermore this characteristic remains even when one considers more purely functionally-motivated ‘abstract’ NLG modules.
For example, Referring Expression Generation, commonly viewed as a single task, needs to have access to at least rhetorical and document information, as well as referencing and adding to the syntactic information. To accommodate this, the RAGS data model includes a more concrete representational proposal, called the ‘whiteboard’ (Calder et al., 1999), in which all the data levels can be represented in a common framework consisting of networks of typed ‘objects’ connected by typed ‘arrows’. This lingua franca allows NLG modules to manipulate data flexibly and consistently. It also facilitates modular design of NLG systems, and reusability of modules and data sets. However, it does not in itself say anything about how modules in such a system might interact. This paper describes a concrete realisation of the RAGS object and arrows model, OASYS, as applied to a simple but flexible NLG system called RICHES. This is not the first such realisation: Cahill et al. (2000) describes a partial re-implementation of the ‘Caption Generation System’ (Mittal et al., 1999) which includes an objects and arrows ‘whiteboard’. The OASYS system includes more specific proposals for processing and inter-module communication, and RICHES demonstrates how this can be used to support a modular architecture based on small scale functionally-motivated units.

3 OASYS

OASYS (Objects and Arrows SYStem) is a software library which provides:

– an implementation of the RAGS Object and Arrows (O/A) data representation,
– support for representing the five-layer RAGS data model in O/A terms,
– an event-driven active database server for O/A representations.

Together these components provide a central core for RAGS-style NLG applications, allowing separate parts of NLG functionality to be specified in independent modules, which communicate exclusively via the OASYS server. The O/A data representation is a simple typed network representation language.
An O/A database consists of a collection of objects, each of which has a unique identifier and a type, and arrows, each of which has a unique identifier, a type, and source and target objects. Such a database can be viewed as a (possibly disconnected) directed network representation: the figures in section 5 give examples of such networks. OASYS pre-defines the object and arrow types required to support the RAGS data model. Two arrow types, el (element) and el(<integer>), are used to build up basic network structures – el identifies its target as a member of the set represented by its source; el(3) identifies its target as the third element of the tuple represented by its source. Arrow type realised by relates structures at different levels of representation, for example indicating that a particular SemRep object is realised by a particular SynRep object. Arrow type revised to provides support for non-destructive modification of a structure, mapping from an object to another of the same type that can be viewed as a revision of it. Arrow type refers to allows an object at one level to indirectly refer to an object at a different level. Object types correspond to the types of the RAGS data model, and are either atomic, tuples, sets or sequences. For example, document structures are built out of DocRep (a 2-tuple), DocAttr (a set of DocFeatAtoms – feature-value pairs), DocRepSeq (a sequence of DocReps or DocLeafs) and DocLeafs. The active database server supports multiple independent O/A databases. Individual modules of an application publish and retrieve objects and arrows on databases, incrementally building the ‘higher level’ data structures. Modules communicate by accessing a shared database. Flow of control in the application is event-based: the OASYS module has the central thread of execution, calls to OASYS generate ‘events’, and modules are implemented as event handlers.
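A minimal sketch of such a typed objects-and-arrows store is given below; the class and method names are our own, not the actual OASYS API.

```python
import itertools

class OADatabase:
    """Typed object/arrow store in the spirit of the O/A representation:
    each object has an id and a type; each arrow has an id, a type, and
    source and target objects."""
    def __init__(self):
        self._ids = itertools.count(1)
        self.objects = {}  # object id -> object type
        self.arrows = {}   # arrow id -> (arrow type, source id, target id)

    def publish_object(self, obj_type):
        oid = next(self._ids)
        self.objects[oid] = obj_type
        return oid

    def publish_arrow(self, arrow_type, source, target):
        aid = next(self._ids)
        self.arrows[aid] = (arrow_type, source, target)
        return aid

    def targets(self, source, arrow_type):
        """Objects reachable from `source` via arrows of `arrow_type`,
        e.g. the elements of a set object via 'el' arrows."""
        return [t for (a, s, t) in self.arrows.values()
                if s == source and a == arrow_type]
```

For instance, a DocRep 2-tuple could publish el(1) and el(2) arrows to its two components, and a realised by arrow could link a SemRep object to a SynRep object.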
A module registers interest in particular kinds of events, and when those events occur, the module’s handler is called to deal with them; this typically involves inspecting the database and adding more structure (which generates further events). OASYS supports three kinds of events: publish events occur whenever an object or arrow is published in a database; module lifecycle events occur whenever a new module starts up or terminates; and synthetic events – arbitrary messages passed between the modules, but not interpreted by OASYS itself – may be generated by modules at any time. An application starts up by initialising all its modules. This generates initialise events, which at least one module must respond to, generating further events which other modules may respond to, and so on, until no new events are generated, at which point OASYS generates finalise events for all the modules and terminates them.

This framework supports a wide range of architectural possibilities. Publish events can be used to make a module wake up whenever data of a particular sort becomes available for processing. Lifecycle events provide, among other things, an easy way to do pipelining: the second module in a pipeline waits for the finalise event of the first and then starts processing, the third similarly waits for the second to finalise, and so on. Synthetic events allow modules to tell each other more explicitly that some data is ready for processing, in situations where simple publication of an object is not enough. RICHES includes examples of all three regimes: the first three modules are pipelined using lifecycle events; LC and RE, FLO and REND interact using synthetic events; while SF watches the database specifically for publication events.

4 RICHES

The RICHES system is a simple generation system that takes as input rhetorical plans and produces patient advice texts. The texts are intended to resemble those found at the PharmWeb site (http://www.pharmweb.net).
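The event-driven control regime can be illustrated with a small dispatcher; the API below is our own sketch, not the actual OASYS interface.

```python
from collections import defaultdict

class EventServer:
    """Modules register handlers for event kinds; emitting an event calls
    every interested handler.  Publish, lifecycle and synthetic events are
    simply different kinds of event name in this sketch."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def register(self, kind, handler):
        self._handlers[kind].append(handler)

    def emit(self, kind, payload=None):
        for handler in list(self._handlers[kind]):
            handler(kind, payload)

# Pipelining via lifecycle events: one module only starts work when it
# sees another module's finalise event; a second module wakes whenever a
# RhetRep is published.  Module names here are illustrative.
log = []
server = EventServer()
server.register("publish:RhetRep", lambda k, d: log.append("MS saw " + d))
server.register("finalise:DP", lambda k, d: log.append("LC starts"))
server.emit("publish:RhetRep", "rhet-tree-1")
server.emit("finalise:DP")
```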
These are simple instructional texts telling patients how to use certain types of medicines, such as nose drops, eye drops, suppositories etc. An example text from PharmWeb is shown in Figure 1, alongside the corresponding text produced by RICHES.

How to Use Nose Drops
1. Blow your nose gently, so that it is clear.
2. Wash your hands.
3. Unscrew the top of the bottle and draw some liquid into the dropper.
4. Tilt your head back.
5. Hold the dropper just above your nose and put the correct number of drops into your nostril.
6. DO NOT let the dropper touch the inside of your nose.
7. Keep your head tilted back for two to three minutes to help the drops run to the back of your nose.
8. Replace the top on the bottle.
KEEP ALL MEDICINES OUT OF THE REACH OF CHILDREN
PharmWeb - Copyright©1994-2001. All rights reserved

Blow your nose so that it is clear.
Wash your hands
Unscrew the top. Then draw the liquid into the dropper.
Tilt your head back
Hold the dropper above your nose. Then put the drops into your nostril.
The dropper must not touch the inside.
Keep your head tilted back for two to three minutes so that the drops run to the back.
Replace the top on the bottle
Generated by RICHES version 1.0 (9/5/2001) on 9/5/2001
©2001, ITRI, University of Brighton

Figure 1: An example text from PharmWeb, together with the corresponding text generated by RICHES.

The main aim of RICHES is to demonstrate the feasibility of a system based on both the RAGS data model and the OASYS server model. The modules collectively construct and access the data representations in a shared blackboard space, and this allows the modules to be defined in terms of their functional role, rather than, say, the kind of data they manipulate or their position in a processing pipeline. Each of the modules in the system is in itself very simple – our primary interest here is in the way they interact. Figure 2 shows the structure of the system.²
The functionality of the individual modules is briefly described below. (In Figure 2, the dashed lines indicate flow of information, solid arrows indicate approximate flow of control between modules, double boxes indicate a completely reused module (from another system), while a double box with a dashed outer box indicates a partially reused module. Ellipses indicate information sources, as opposed to processing modules.)

Rhetorical Oracle (RO) The input to the system is a RhetRep of the document to be generated: a tree with internal nodes labelled with (RST-style) rhetorical relations and RhetLeaves referring to semantic proposition representations (SemReps). RO simply accesses such a representation from a data file and initialises the OASYS database.

Media Selection (MS) RICHES produces documents that may include pictures as well as text. As soon as the RhetRep becomes available, this module examines it and decides what can be illustrated and what picture should illustrate it. Pictures, annotated with their SemReps, are part of the picture library, and Media Selection builds small pieces of DocRep referencing the pictures.

Document Planner (DP) The Document Planner, based on the ICONOCLAST text planner (Power, 2000), takes the input RhetRep and produces a document structure (DocRep). This specifies aspects such as the text-level (e.g., paragraph, sentence) and the relative ordering of propositions in the DocRep. Its leaves refer to SynReps corresponding to syntactic phrases. This module is pipelined after MS, to make sure that it takes account of any pictures that have been included in the document.

Lexical Choice (LC) Lexical choice happens in two stages. In the first stage, LC chooses the lexical items for the predicate of each SynRep. This fixes the basic syntactic structure of the proposition, and the valency mapping between semantic and syntactic arguments. At this point the basic document structure is complete, and LC advises REND and SF that they can start processing.
LC then goes into a second phase, interleaved with RE and FLO: for each sentence, RE determines the referring expressions for each noun phrase, LC then lexicalises them, and when the sentence is complete FLO invokes LinGO to realise them.

Figure 2: The structure of the RICHES system (the Rhetorical Oracle, Medium Selection, Document Planner, Lexical Choice, Referring Expressions, Sentence Finaliser, FLO, Renderer and LinGO modules, together with OASYS, the lexicon and the picture library)

Referring Expressions (RE) The Referring Expression module adapts the SynReps to add information about the form of a noun phrase. It decides whether it should be a pronoun, a definite noun phrase or an indefinite noun phrase.

Sentence Finaliser (SF) The Sentence Finaliser carries out high-level sentential organisation. LC and RE together build individual syntactic phrases, but do not combine them into whole sentences. SF uses rhetorical and document structure information to decide how to complete the syntactic representations, for example, combining main and subordinate clauses. In addition, SF decides whether a sentence should be imperative, depending on who the reader of the document is (an input parameter to the system).

Finalise Lexical Output (FLO) RICHES uses an external sentence realiser component with its own non-RAGS input specification. FLO provides the interface to this realiser, extracting (mostly syntactic) information from OASYS and converting it to the appropriate form for the realiser. Currently, FLO supports the LinGO realiser (Carroll et al., 1999), but we are also looking at FLO modules for RealPro (Lavoie and Rambow, 1997) and FUF/SURGE (Elhadad et al., 1997).

Renderer (REND) The Renderer is the module that puts the concrete document together. Guided by the document structure, it produces HTML formatting for the text and positions and references the pictures. Individual sentences are produced for it by LinGO, via the FLO interface.
FLO actually processes sentences independently of REND, so when REND makes a request, either the sentence is there already, or the request is queued, and serviced when it becomes available.

LinGO The LinGO realiser uses a wide-coverage grammar of English in the LKB HPSG framework (Copestake and Flickinger, 2000). The tactical generation component accepts input in the Minimal Recursion Semantics formalism and produces the target text using a chart-driven algorithm with an optimised treatment of modification (Carroll et al., 1999). No domain-specific tuning of the grammar was required for the RICHES system; only a few additions to the lexicon were necessary.

5 An example: generation in RICHES

In this section we show how RICHES generates the first sentence of the example text, Blow your nose so that it is clear, and the picture that accompanies the text. The system starts with a rhetorical representation (RhetRep) provided by the RO (see Figure 3). (In the figures, labels indicate object types and the subscript numbers are identifiers provided by OASYS for each object. Those parts inside boxes are simplifications to the actual representation used in order not to clutter the figures.) The first active module to run is MS, which traverses the RhetRep looking at the semantic propositions labelling the RhetRep leaves, to see if any can be illustrated by pictures in the picture library. Each picture in the library is encoded with a semantic representation. Matching between propositions and pictures is based on the algorithm presented in Van Deemter (1999), which selects the most informative picture whose representation contains nothing that is not contained in the proposition. For each picture that will be included, a leaf node of document representation is created and a realised by arrow is added to it from the semantic proposition object (see Figure 4).

Figure 3: Initial rhetorical and semantic representations (a motivation relation linking the proposition “patient blows patient’s nose”, with the actor “patient” and object “patient’s nose”, to the proposition “patient’s nose is clear”)

Figure 4: Inclusion of a picture by MS (a realised by arrow from the semantic proposition to a document leaf referencing the picture “noseblow.gif”)

The DP is an adaptation of the ICONOCLAST constraint-based planner and takes the RhetRep as its input. The DP maps the rhetorical representation into a document representation, deciding how the content will be split into sentences, paragraphs, item lists, etc., and what order the elements will appear in. It also inserts markers that will be translated to cue phrases to express some rhetorical relations explicitly. Initially the planner creates a skeleton document representation that is a one-to-one mapping of the rhetorical representation, but taking account of any nodes already introduced by the MS module, and assigns finite-domain constraint variables to the features labelling each node. It then applies constraint satisfaction techniques to identify a consistent set of assignments to these variables, and publishes the resulting document structure for other modules to process. In our example, the planner decided that the whole document will be expressed as a paragraph (that in this case consists of a single text sentence) and that the document leaves will represent text-phrases. It also decides that these two text-phrases will be linked by a ‘subordinator’ marker (which will eventually be realised as “so that”), and that “patient blows patient’s nose” will be realised before “patient’s nose is clear”. At this stage, the representation looks like Figure 5. The first stage of LC starts after DP has finished and chooses the lexical items for the main predicates (in this case “blow” and “clear”).
These are created as SynReps, linked to the leaves of the DocRep tree. In addition, the initial SynReps for the syntactic arguments are created, and linked to the corresponding arguments of the semantic proposition (for example, syntactic SUBJECT is linked to semantic ACTOR). The database at this stage (showing only the representation pertinent to the first sentence) looks like Figure 6. Until this point the flow of control has been a straight pipeline. Referring Expression Generation (RE) and the second stage of Lexical Choice (LC) operate in an interleaved fashion. RE collects the propositions in the order specified in the document representation and, for each of them, it inspects the semantic entities it contains (e.g., for our first sentence, those entities are ‘patient’ and ‘nose’) to decide whether they will be realised as a definite description or a pronoun. For our example, the final structure for the first argument in the first sentence can be seen in Figure 7 (although note that it will not be realised explicitly because the sentence is an imperative one).

Figure 5: Document representation (a paragraph node with two text-phrase children linked by a ‘subordinator’ marker, connected by realised by arrows to the semantic propositions and to the picture leaf for “noseblow.gif”)

Figure 6: First stage of Lexical Choice – part of sentence 1 (a SynRep with root: blow, category: verb(trans), sentence type: imperative, with a syntactic subject argument linked to the semantic actor)
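The picture-selection criterion attributed above to Van Deemter (1999) — pick the most informative picture whose representation contains nothing that is not contained in the proposition — can be sketched as follows. This is an illustrative reconstruction only: semantic representations are simplified to plain sets of atoms, and the function and library names are invented:

```python
def select_picture(proposition, library):
    """Return the name of the most informative applicable picture, or None.

    proposition: set of semantic atoms in the proposition to illustrate.
    library: dict mapping picture name -> set of semantic atoms it depicts.
    A picture is applicable if its representation is a subset of the
    proposition; among applicable pictures, the largest (most informative)
    representation wins.
    """
    candidates = [(name, rep) for name, rep in library.items()
                  if rep <= proposition]   # nothing not in the proposition
    if not candidates:
        return None
    return max(candidates, key=lambda nr: len(nr[1]))[0]
```

For the example proposition “patient blows patient’s nose”, a picture annotated with a subset of those atoms (such as a hypothetical entry for “noseblow.gif”) would be chosen over a less specific one.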
SF waits for the syntactic structure of individual clauses to be complete, and then inspects the syntactic, rhetorical and document structure to decide how to combine clauses. In the example, it decides to represent the rhetorical ‘motivation’ relation within a single text sentence by using the subordinator ‘so that’. It also makes the main clause an imperative, and the subordinate clause indicative. As soon as SF completes a whole syntactic sentence, FLO notices, and extracts the information required to interface to LinGO with an MRS structure. The string of words returned by LinGO is stored internally by FLO until REND requests it. Finally, REND draws together all the information from the document and syntactic structures, and the realiser outputs provided by FLO, and produces HTML. The entire resultant text can be seen on the right hand side of Figure 1.

Figure 7: Second stage of Lexical Choice – entity 1 of sentence 1 (the SynRep for the ‘patient’ entity, now marked with form: pron, root: patient, person: 2nd)

6 Summary

In this paper, we have described a small NLG system implemented using an event-driven, object-and-arrow based processing architecture. The system makes use of the data representation ideas proposed in the RAGS project, but adds a concrete proposal relating to application organisation and process control. Our main aims were to develop this ‘process model’ as a complement to the RAGS ‘data model,’ show that it could be implemented and used effectively, and test whether the RAGS ideas about data organisation and development can actually be deployed in such a system. Although the RICHES generator is quite simple, it demonstrates that it is possible to construct a RAGS-style generation system using these ideas, and that the OASYS processing model has the flexibility to support the kind of modularised NLG architecture that the RAGS initiative presupposes.
Some of the complexity in the RICHES system is there to demonstrate the potential for different types of control strategies. Specifically, we do not make use of the possibilities offered by the interleaving of the RE and LC, as the examples we cover are too simple. However, this setup enables RE, in principle, to make use of information about precisely how a previous reference to an entity has been realised. Thus, if the first mention of an entity is as “the man”, RE may decide that a pronoun, “he”, is acceptable in a subsequent reference. If, however, the first reference was realised as “the person”, it may decide to say “the man” next time around. At the beginning of this paper we mentioned systems that do not implement a standard pipeline. The RICHES system demonstrates that the RAGS model is sufficiently flexible to permit modules to work concurrently (as the REND and LC do in RICHES), alternately, passing control backwards and forwards (as the RE and LC modules do in RICHES), or pipelined (as the Document Planner and LC do in RICHES). The different types of events allow for a wide range of possible control models. In the case of a simple pipeline, each module only needs to know that its predecessor has finished. Depending on the precise nature of the work each module is doing, this may be best achievable through publish events (e.g. when a DocRep has been published, the DP may be deemed to have finished its work) or through lifecycle events (e.g. the DP effectively states that it has finished). A revision-based architecture might require synthetic events to “wake up” a module to do some more work, after it has finished its first pass.

References

Lynne Cahill, Christine Doran, Roger Evans, Chris Mellish, Daniel Paiva, Mike Reape, Donia Scott, and Neil Tipper. 1999. In search of a reference architecture for NLG systems. In Proceedings of the Seventh European Natural Language Generation Workshop, Toulouse, France.
Lynne Cahill, Christine Doran, Roger Evans, Chris Mellish, Daniel Paiva, Mike Reape, Donia Scott, and Neil Tipper. 2000. Reinterpretation of an existing NLG system in a Generic Generation Architecture. In Proceedings of the First International Natural Language Generation Conference, pages 69–76, Mitzpe Ramon, Israel.

Jo Calder, Roger Evans, Chris Mellish, and Mike Reape. 1999. “Free choice” and templates: how to get both at the same time. In “May I speak freely?” Between templates and free choice in natural language generation, number D-99-01, pages 19–24, Saarbrücken.

John Carroll, Ann Copestake, Dan Flickinger, and Victor Poznanski. 1999. An efficient chart generator for (semi-)lexicalist grammars. In Proceedings of the 7th European Workshop on Natural Language Generation (EWNLG’99), pages 86–95, Toulouse, France.

Ann Copestake and Dan Flickinger. 2000. An open source grammar development environment and broad-coverage English grammar using HPSG. In Proceedings of the 2nd International Conference on Language Resources and Evaluation, Athens, Greece.

Michael Elhadad, Kathleen McKeown, and Jacques Robin. 1997. Floating constraints in lexical choice. Computational Linguistics, 23(2):195–240.

K. Inui, T. Tokunaga, and H. Tanaka. 1992. Text revision: A model and its implementation. In R. Dale, E. Hovy, D. Rosner, and O. Stock, editors, Aspects of Automated Natural Language Generation, number LNAI 587. Springer-Verlag.

B. Lavoie and O. Rambow. 1997. A fast and portable realizer for text generation systems. In Proceedings of the Fifth Conference on Applied Natural Language Processing, pages 265–268, Washington, DC.

Chris Mellish, Roger Evans, Lynne Cahill, Christy Doran, Daniel Paiva, Mike Reape, Donia Scott, and Neil Tipper. 2000. A representation for complex and evolving data dependencies in generation. In Language Technology Joint Conference, ANLP-NAACL 2000, Seattle.

V. O. Mittal, J. D. Moore, G. Carenini, and S. Roth. 1999.
Describing complex charts in natural language: A caption generation system. Computational Linguistics.

Richard Power. 2000. Planning texts by constraint satisfaction. In Proceedings of the 18th International Conference on Computational Linguistics (COLING-2000), pages 642–648, Saarbrücken, Germany.

RAGS. 2000. Towards a Reference Architecture for Natural Language Generation Systems. Technical report, Information Technology Research Institute (ITRI), University of Brighton. Available at http://www.itri.brighton.ac.uk/projects/rags.

Ehud Reiter. 1994. Has a consensus NL generation architecture appeared and is it psycholinguistically plausible? In Proceedings of the Seventh International Workshop on Natural Language Generation, pages 163–170, Kennebunkport, Maine.

J. Robin. 1994. Revision-Based Generation of Natural Language Summaries Providing Historical Background: Corpus-Based Analysis, Design, Implementation and Evaluation. Technical Report CUCS-034-94, Columbia University.

K. van Deemter. 1999. Document generation and picture retrieval. In Proceedings of the Third International Conference on Visual Information Systems (VISUAL-99), Springer Lecture Notes in Computer Science no. 1614, pages 632–640, Amsterdam, Netherlands.
Non-Verbal Cues for Discourse Structure

Justine Cassell†, Yukiko I. Nakano†, Timothy W. Bickmore†, Candace L. Sidner‡, and Charles Rich‡
†MIT Media Laboratory, 20 Ames Street, Cambridge, MA 02139, {justine, yukiko, bickmore}@media.mit.edu
‡Mitsubishi Electric Research Laboratories, 201 Broadway, Cambridge, MA 02139, {sidner, rich}@merl.com

Abstract

This paper addresses the issue of designing embodied conversational agents that exhibit appropriate posture shifts during dialogues with human users. Previous research has noted the importance of hand gestures, eye gaze and head nods in conversations between embodied agents and humans. We present an analysis of human monologues and dialogues that suggests that postural shifts can be predicted as a function of discourse state in monologues, and discourse and conversation state in dialogues. On the basis of these findings, we have implemented an embodied conversational agent that uses Collagen in such a way as to generate postural shifts.

1. Introduction

This paper provides empirical support for the relationship between posture shifts and discourse structure, and then derives an algorithm for generating posture shifts in an animated embodied conversational agent from discourse states produced by the middleware architecture known as Collagen [18]. Other nonverbal behaviors have been shown to be correlated with the underlying conversational structure and information structure of discourse. For example, gaze shifts towards the listener correlate with a shift in conversational turn (from the conversational participants’ perspective, they can be seen as a signal that the floor is available). Gestures correlate with rhematic content in accompanying language (from the conversational participants’ perspective, these behaviors can be seen as a signal that accompanying speech is of high interest).
A better understanding of the role of nonverbal behaviors in conveying discourse structure enables improvements in the naturalness of embodied dialogue systems, such as embodied conversational agents, as well as contributing to algorithms for recognizing discourse structure in speech-understanding systems. Previous work, however, has not addressed major body shifts during discourse, nor has it addressed the nonverbal correlates of topic shifts.

2. Background

Only recently have computational linguists begun to examine the association of nonverbal behaviors and language. In this section we review research by non-computational linguists and discuss how this research has been employed to formulate algorithms for natural language generation or understanding. About three-quarters of all clauses in descriptive discourse are accompanied by gestures [17], and within those clauses, the most effortful part of gestures tends to co-occur with or just before the phonologically most prominent syllable of the accompanying speech [13]. It has been shown that when speech is ambiguous or in a speech situation with some noise, listeners rely on gestural cues [22] (and, the higher the noise-to-signal ratio, the more facilitation by gesture). Even when gestural content overlaps with speech (reported to be the case in roughly 50% of utterances, for descriptive discourse), gesture often emphasizes information that is also focused pragmatically by mechanisms like prosody in speech. In fact, the semantic and pragmatic compatibility in the gesture-speech relationship recalls the interaction of words and graphics in multimodal presentations [11]. On the basis of results such as these, several researchers have built animated embodied conversational agents that ally synthesized speech with animated hand gestures. For example, Lester et al. [15] generate deictic gestures and choose referring expressions as a function of the potential ambiguity and proximity of objects referred to.
Rickel and Johnson [19]'s pedagogical agent produces a deictic gesture at the beginning of explanations about objects. André et al. [1] generate pointing gestures as a sub-action of the rhetorical action of labeling, in turn a sub-action of elaborating. Cassell and Stone [3] generate either speech, gesture, or a combination of the two, as a function of the information structure status and surprise value of the discourse entity. Head and eye movement has also been examined in the context of discourse and conversation. Looking away from one’s interlocutor has been correlated with the beginning of turns. From the speaker’s point of view, this look away may prevent an overload of visual and linguistic information. On the other hand, during the execution phase of an utterance, speakers look more often at listeners. Head nods and eyebrow raises are correlated with emphasized linguistic items – such as words accompanied by pitch accents [7]. Some eye movements occur primarily at the ends of utterances and at grammatical boundaries, and appear to function as synchronization signals. That is, one may request a response from a listener by looking at the listener, and suppress the listener’s response by looking away. Likewise, in order to offer the floor, a speaker may gaze at the listener at the end of the utterance. When the listener wants the floor, s/he may look at and slightly up at the speaker [10]. It should be noted that turn taking only partially accounts for eye gaze behavior in discourse. A better explanation for gaze behavior integrates turn taking with the information structure of the propositional content of an utterance [5]. Specifically, the beginning of themes are frequently accompanied by a look-away from the hearer, and the beginning of rhemes are frequently accompanied by a look-toward the hearer. When these categories are co-temporaneous with turn construction, then they are strongly predictive of gaze behavior. 
Results such as these have led researchers to generate eye gaze and head movements in animated embodied conversational agents. Takeuchi and Nagao [21], for example, generate gaze and head nod behaviors in a “talking head.” Cassell et al. [2] generate eye gaze and head nods as a function of turn taking behavior, head turns just before an utterance, and eyebrow raises as a function of emphasis. To our knowledge, research on posture shifts and other gross body movements has not been used in the design or implementation of computational systems. In fact, although a number of conversational analysts and ethnomethodologists have described posture shifts in conversation, their studies have been qualitative in nature, and difficult to reformulate as the basis of algorithms for the generation of language and posture. Nevertheless, researchers in the non-computational fields have discussed posture shifts extensively. Kendon [13] reports a hierarchy in the organization of movement such that the smaller limbs such as the fingers and hands engage in more frequent movements, while the trunk and lower limbs change relatively rarely. A number of researchers have noted that changes in physical distance during interaction seem to accompany changes in the topic or in the social relationship between speakers. For example, Condon and Osgton [9] have suggested that in a speaking individual the changes in these more slowly changing body parts occur at the boundaries of the larger units in the flow of speech. Scheflen (1973) also reports that posture shifts and other general body movements appear to mark the points of change between one major unit of communicative activity and another. Blom & Gumperz (1972) identify posture changes and changes in the spatial relationship between two speakers as indicators of what they term "situational shifts" -- momentary changes in the mutual rights and obligations between speakers accompanied by shifts in language style.
Erickson (1975) concludes that proxemic shifts seem to be markers of 'important' segments. In his analysis of college counseling interviews, they occurred more frequently than any other coded indicator of segment changes, and were therefore the best predictor of new segments in the data. Unfortunately, in none of these studies are statistics provided, and their analyses rely on intuitive definitions of discourse segment or “major shift”. For this reason, we carried out our own empirical study.

3. Empirical Study

Videotaped “pseudo-monologues” and dialogues were used as the basis for the current study. In “pseudo-monologues,” subjects were asked to describe each of the rooms in their home, then give directions between four pairs of locations they knew well (e.g., home and the grocery store). The experimenter acted as a listener, only providing backchannel feedback (head nods, smiles and paraverbals such as "uh-huh"). For dialogues, two subjects were asked to generate an idea for a class project that they would both like to work on, including: 1) what they would work on; 2) where they would work on it (including facilities, etc.); and 3) when they would work on it. Subjects stood in both conditions and were told to perform their tasks in 5-10 minutes. The pseudo-monologue condition (pseudo- because there was in fact an interlocutor, although he gave backchannel feedback only and never took the turn) allowed us to investigate the relationship between discourse structure and posture shift independent of turn structure. The two tasks were constructed to allow us to identify exactly where discourse segment boundaries would be placed. The video data was transcribed and coded for three features: discourse segment boundaries, turn boundaries, and posture shifts. A discourse segment is taken to be an aggregation of utterances and sub-segments that convey the discourse segment purpose, which is an intention that leads to the segment initiation [12].
In this study we chose initially to look at high-level discourse segmentation phenomena rather than those discourse segments embedded deeper in the discourse. Thus, the time points at which the assigned task topics were started served as segmentation points. Turn boundaries were coded (for dialogues only) as the point in time at which the start or end of an utterance co-occurred with a change in speaker, but excluding backchannel feedback. Turn overlaps were coded as open-floor time. We defined a posture shift as a motion or a position shift for a part of the human body, excluding hands and eyes (which we have dealt with in other work). Posture shifts were coded with start and end time of occurrence (duration), body part in play (for this paper we divided the body at the waistline and compared upper body vs. lower body shifts), and an estimated energy level of the posture shift. Energy level was normalized for each subject by taking the largest posture shift observed for that subject as 100% and coding all other posture shift energies relative to the 100% case. Posture shifts that occurred as part of gesture or were clearly intentionally generated (e.g., turning one's body while giving directions) were not coded.

4. Results

Data from seven monologues and five dialogues were transcribed, and then coded and analyzed independently by two raters. A total of 70.5 minutes of data was analyzed (42.5 minutes of dialogue and 29.2 minutes of monologue). A total of 67 discourse segments were identified (25 in the dialogues and 42 in the monologues), which constituted 407 turns in the dialogue data. We used the instructions given to subjects concerning the topics to discuss as segmentation boundaries. In future research, we will address finer-grained discourse segmentation. For posture shift coding, raters coded all posture shifts independently, and then calculated reliability on the transcripts of one monologue (5.2 minutes) and both speakers from one dialogue (8.5 minutes).
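The inter-rater reliability figures reported in this section use percentage agreement and Cohen's kappa, a chance-corrected agreement statistic. As a reminder of what kappa computes, here is a small sketch of the standard formulation (illustrative only, not tied to the coding tool actually used):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' label sequences of equal length:
    observed agreement corrected for the agreement expected by chance
    given each rater's label distribution."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed proportion of items on which the raters agree.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of the raters' marginal label probabilities.
    p_chance = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
                   for l in labels)
    return (p_obs - p_chance) / (1 - p_chance)
```

Perfect agreement yields kappa = 1, while agreement no better than chance yields kappa = 0; a value of .64, as reported below for posture-shift presence, indicates substantial agreement beyond chance.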
Agreement on the presence of an upper body or lower body posture shift in a particular location (taking location to be a 1-second window that contains all of or a part of a posture shift) for these three speakers was 89% (kappa = .64). For interrater reliability of the coding of energy level, a Spearman’s rho revealed a correlation coefficient of .48 (p<.01).

4.1 Analysis

Posture shifts occurred regularly throughout the data (an average of 15 per speaker in both pseudo-monologues and dialogues). This, together with the fact that the majority of time was spent within discourse segments and within turns (rather than between segments), led us to normalize our posture shift data for comparison purposes. For relatively brief intervals (inter-discourse-segment and inter-turn) normalization by number of inter-segment occurrences was sufficient (ps/int); however, for long intervals (intra-discourse-segment and intra-turn) we needed to normalize by time to obtain meaningful comparisons. For this normalization metric we looked at posture shifts per second (ps/s). This gave us a mean average of .06 posture shifts/second (ps/s) in the monologues (SD=.07), and .07 posture shifts/second in the dialogues (SD=.08).

Our initial analysis compared posture shifts made by the current speaker within discourse segments (intra-dseg) to those produced at the boundaries of discourse segments (inter-dseg). It can be seen (in Table 4.1.1) that posture shifts occur an order of magnitude more frequently at discourse segment boundaries than within discourse segments in both monologues and dialogues. Posture shifts also tend to be more energetic at discourse segment boundaries (F(1,251)=10.4; p<0.001).

Table 4.1.1. Posture WRT Discourse Segments
             Monologues                 Dialogues
             ps/s    ps/int  energy    ps/s    ps/int  energy
inter-dseg   0.340   0.837   0.832     0.332   0.533   0.844
intra-dseg   0.039   --      0.701     0.053   --      0.723

Initially, we classified data as being inter- or intra-turn. Table 4.1.2 shows that turn structure does have an influence on posture shifts; subjects were five times more likely to exhibit a shift at a boundary than within a turn.

Table 4.1.2. Posture Shifts WRT Turns
             ps/s    ps/int  energy
inter-turn   0.140   0.268   0.742
intra-turn   0.022   --      0.738

We noticed, however, that posture shifts appeared to congregate at the beginnings or ends of turns, and so our subsequent analyses examined start-turns, mid-turns and end-turns.

Table 4.1.3. Posture by Discourse and Turn Breakdown
                        ps/s    ps/int
inter-dseg/start-turn   0.562   0.542
inter-dseg/mid-turn     0.000   0.000
inter-dseg/end-turn     0.130   0.125
intra-dseg/start-turn   0.067   0.135
intra-dseg/mid-turn     0.041   --
intra-dseg/end-turn     0.053   0.107

An interaction exists between turns and discourse segments such that discourse segment boundaries are ten times more likely to co-occur with turn changes than within turns. Both turn and discourse structure exhibit an influence on posture shifts, with discourse having the most predictive value: starting a turn while starting a new discourse segment is marked with a posture shift roughly 10 times more often than starting a turn while staying within a discourse segment. It is clear from these results that posture is indeed correlated with discourse state, such that speakers generate a posture shift when initiating a new discourse segment, which is often at the boundary between turns.

In addition to looking at the occurrence and energy of posture shifts, we also analyzed the distributions of upper vs. lower body shifts and the duration of posture shifts. Speaker upper body shifts were found to be used more frequently at the start of turns (48%) than at the middle of turns (36%) or end of turns (18%) (F(2,147)=5.39; p<0.005), with no significant dependence on discourse structure. Finally, speaker posture shift duration was found to change significantly as a function of both turn and discourse structure (see Figure 4.1.1).
At the start of turns, posture shift duration is approximately the same whether a new topic is introduced or not (2.5 seconds). However, when ending a turn, speakers move significantly longer (7.0 seconds) when finishing a topic than when the topic is continued by the other interlocutor (2.7 seconds) (F(1,148)=17.9; p<0.001).

Figure 4.1.1: Posture Shift Duration by DSeg and Turn

5. System

In the following sections we discuss how the results of the empirical study were integrated, along with Collagen, into our existing embodied conversational agent, Rea.

5.1 System Architecture

Rea is an embodied conversational agent that interacts with a user in the real estate agent domain [2]. The system architecture of Rea is shown in Figure 5.1. Rea takes input from a microphone and two cameras in order to sense the user's speech and gesture. The Understanding Module (UM) interprets and integrates this multimodal input and outputs a unified semantic representation. The UM then sends the output to Collagen as the Dialogue Manager. Collagen, as further discussed below, maintains the state of the dialogue as shared between Rea and a user. The Reaction Module decides Rea's next action based on the discourse state maintained by Collagen. It also assigns information structure to output utterances so that gestures can be appropriately generated. The semantic representation of the action, including verbal and non-verbal behaviors, is sent to the Generation Module, which generates surface linguistic expressions and gestures, including a set of instructions to achieve synchronization between animation and speech. These instructions are executed by a 3D animation renderer and a text-to-speech system. Table 5.1 shows the associations between discourse and conversational state that Rea is currently able to handle. In other work we have discussed how Rea deals with the association between information structure and gesture [6]. In the following sections, we focus on Rea's generation of posture shifts.
Table 5.1: Discourse functions & non-verbal behavior cues

Discourse-level info.    Function                 Non-verbal behavior cues
Discourse structure      new segment              posture_shift
Conversation structure   turn giving              eye_gaze & (stop_gesturing | hand_gesture)
                         turn keeping             (look_away | keep_gesture)
                         turn taking              eye_gaze & posture_shift
Information structure    emphasize information    eye_gaze & beat and other hand gestures

Figure 5.1: System architecture

5.2 The Collagen dialogue manager

Collagen(TM) is JAVA middleware for building COLLAborative interface AGENts to work with users on interface applications. Collagen is designed with the capability to participate in collaboration and conversation, based on [12], [16]. Collagen updates the focus stack and recipe tree using a combination of the discourse interpretation algorithm of [16] and the plan recognition algorithms of [14]. It takes as input user and system utterances and interface actions, and accesses a library of recipes describing actions in the domain. After updating the discourse state, Collagen makes three resources available to the interface agent: focus of attention (using the focus stack), segmented interaction history (of completed segments), and an agenda of next possible actions created from the focus stack and recipe tree.

5.3 Output Generation

The Reaction Module works as a content planner in the Rea architecture, and also plays the role of an interface agent in Collagen. It has access to the discourse state and the agenda using APIs provided by Collagen. Based on the results reported above, we describe here how Rea plans her next nonverbal actions using the resources that Collagen maintains.
The empirical study revealed that posture shifts are distributed with respect to discourse segment and turn boundaries, and that the form of a posture shift differs according to these codeterminants. Therefore, generation of posture shifts in Rea is determined according to these two factors, with Collagen contributing information about current discourse state.

5.3.1 Discourse structure information

Any posture shift that occurs between the end of one discourse segment and the beginning of the next is defined as an inter-discourse-segment posture shift. In order to elaborate different generation rules for inter- vs. intra-discourse segments, Rea judges (D1) whether the next utterance starts a new topic or contributes to the current discourse purpose, and (D2) whether the next utterance is expected to finish a segment. First, (D1) is calculated by referring to the focus stack and agenda. In planning a next action, Rea accesses the goal agenda in Collagen and gets the content of her next utterance. She also accesses the focus stack and gets the current discourse purpose that is shared between her and the user. By comparing the current purpose and the purpose of her next utterance, Rea can judge whether her next utterance contributes to the current discourse purpose or not. For example, if the current discourse purpose is to find a house to show the user (FindHouse), and the next utterance that Rea plans to say is as follows,

(1) (Ask.What (agent Propose.What (user FindHouse <city ?>)))
    Rea says: "What kind of transportation access do you need?"

then Rea uses Collagen APIs to compare the current discourse purpose (FindHouse) to the purpose of utterance (1). The purpose of this utterance is to ask the value of the transportation parameter of FindHouse. Thus, Rea judges that this utterance contributes to the current discourse purpose, and continues the same discourse segment (D1 = continue).
On the other hand, if Rea's next utterance is about showing a house,

(2) (Propose.Should (agent ShowHouse (joint 123ElmStreet)))
    Rea says: "Let's look at 123 Elm Street."

then this utterance does not directly contribute to the current discourse purpose because it does not ask a parameter of FindHouse, and it introduces a new discourse purpose, ShowHouse. In this case, Rea judges that there is a discourse segment boundary between the previous utterance and the next one (D1 = topic change). In order to calculate (D2), Rea looks at the plan tree in Collagen, and judges whether the next utterance addresses the last goal in the current discourse purpose. If this is the case, Rea expects to finish the current discourse segment with the next utterance (D2 = finish topic). As for conversational structure, Rea needs to know: (T1) whether Rea is taking a new turn with the next utterance or keeping her current turn for the next utterance, and (T2) whether Rea's next utterance requires that the user respond. First, (T1) is judged by referring to the dialogue history.¹ The dialogue history stores both system utterances and user utterances that occurred in the dialogue. In the history, each utterance is stored as a logical form based on an artificial discourse language [20]. As shown above in utterance (1), the first argument of the action indicates the speaker of the utterance; in this example, it is "agent". The turn boundary can be estimated by comparing the speaker of the previous utterance with the speaker of the next utterance. If the speaker of the previous utterance is not Rea, there is a turn boundary before the next utterance (T1 = take turn). If the speaker of the previous utterance is Rea, that means that Rea will keep the same turn for the next utterance (T1 = keep turn). Second, (T2) is judged by looking at the type of Rea's next utterance. For example, when Rea asks a question, as in utterance (1), Rea expects the user to answer the question.
In this case, Rea must convey to the user that the system gives up the turn (T2 = give up turn).

5.3.2 Deciding and selecting a posture shift

Combining information about discourse structure (D1, D2) and conversation structure (T1, T2), the system decides on posture shifts for the beginning of the utterance and the end of the utterance. Rea decides whether or not to do a posture shift by calling a probabilistic function that looks up the probabilities in Table 5.3.1. A posture shift for the beginning of the utterance is decided based on the combination of (D1) and (T1). For example, if the combined factors match Case (a), the system decides to generate a posture shift with 54% probability for the beginning of the utterance. Note that in Case (d), that is, when Rea keeps the turn without changing a topic, we cannot calculate a per-interval posture shift rate. Instead, we use a posture shift rate normalized for time. This rate is used in the Generation Module, which calculates the utterance duration and generates a posture shift during the utterance based on this posture shift rate. On the other hand, ending posture shifts are decided based on the combination of (D2) and (T2). For example, if the combined factors match Case (e), the system decides to generate a posture shift with 4% probability for the ending of the utterance. When Rea does decide to activate a posture shift, she then needs to choose which posture shift to perform. Our empirical data indicate that the energy level of the posture shift differs depending on whether there is a discourse segment boundary or not. Moreover, the duration of a posture shift differs depending on the place in a turn: start-, mid-, or end-turn.

¹ We currently maintain a dialogue history in Rea even though Collagen has one as well. This is in order to store and manipulate the information to generate hand gestures and assign intonational accents. This information will be integrated into Collagen in the near future.
Table 5.3.1: Posture Decision Probabilities for Dialogue

Place          Case  Discourse (D1/D2)  Conversation (T1/T2)  Probability  Energy  Duration  Body part
Beginning of   a     topic change       take turn             0.54/int     high    default   upper & lower
the utterance  b     topic change       keep turn             0            -       -         -
               c     continue           take turn             0.13/int     low     default   upper or lower
               d     continue           keep turn             0.14/sec     low     short     lower
End of the     e     finish topic       give turn             0.04/int     high    long      lower
utterance      f     continue           give turn             0.11/int     low     default   lower

Based on these results, we define posture shift selection rules for energy, duration, and body part. The correspondence with discourse information is shown in Table 5.3.1. For example, in Case (a), the system selects a posture shift with high energy, using both upper and lower body. After deciding whether or not Rea should shift posture and (if so) choosing a kind of posture shift, Rea sends a command to the Generation Module to generate a specific kind of posture shift within a specific time duration.

Table 5.3.2: Posture Decision Probabilities for Monologue

Case  Discourse (D1)  Probability  Energy
g     change topic    0.84/int     high
h     continue        0.04/sec     low

Posture shifts for pseudo-monologues can be decided using the same mechanism as that for dialogue, but omitting conversation structure information. The probabilities are given in Table 5.3.2. For example, if Rea changes the topic with her next utterance, a posture shift is generated 84% of the time with high-energy motion. In other cases, the system randomly generates low-energy posture shifts 0.04 times per second.

6. Example

Figure 6.1 shows a dialogue between Rea and the user, and shows how Rea decides to generate posture shifts. This dialogue consists of two major segments: finding a house (dialogue), and showing a house (pseudo-monologue). Based on this task structure, we defined plan recipes for Collagen.
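The begin-of-utterance decision in Table 5.3.1 amounts to a probabilistic table lookup. The sketch below is our own illustration, not Rea's actual implementation; the case keys and dictionary layout are assumptions:

```python
import random

# Begin-of-utterance cases (a)-(c) from Table 5.3.1, keyed by (D1, T1).
# Case (d) uses a per-second rate (0.14/sec) applied during generation,
# so it is omitted from this per-utterance lookup.
BEGIN_CASES = {
    ("topic change", "take turn"): (0.54, "high", "upper & lower"),   # case (a)
    ("topic change", "keep turn"): (0.0,  None,   None),              # case (b)
    ("continue",     "take turn"): (0.13, "low",  "upper or lower"),  # case (c)
}

def decide_begin_posture_shift(d1, t1, rng=random.random):
    """Return (energy, body_part) for a begin-of-utterance posture shift,
    or None if no shift is generated."""
    prob, energy, body = BEGIN_CASES[(d1, t1)]
    if rng() < prob:
        return (energy, body)
    return None
```

End-of-utterance shifts would use the same lookup keyed by (D2, T2), covering cases (e) and (f), and the monologue cases (g) and (h) drop the conversation-structure key entirely.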
The first shared discourse purpose [goal: HaveConversation] is introduced by the user before the example. Then, in utterance (1), the user introduces the main part of the conversation [goal: FindHouse]. The next goal in the agenda, [goal: IdentifyPreferredCity], should be accomplished to identify a parameter value for [goal: FindHouse]. This goal directly contributes to the current purpose, [goal: FindHouse]. This case is judged to be a turn boundary within a discourse segment (Case (c)), and Rea decides to generate a posture shift at the beginning of the utterance with 13% probability. If Rea decides to shift posture, she selects a low-energy posture shift using either upper or lower body. In addition to a posture shift at the beginning of the utterance, Rea may also choose to generate a posture shift to end the turn. As utterance (2) expects the user to take the turn and continue to work on the same discourse purpose, this is Case (f). Thus, the system generates an end-of-utterance posture shift 11% of the time. If generated, a low-energy posture shift is chosen. If beginning and/or ending posture shifts are generated, they are sent to the Generation Module (GM), which calculates the schedule of these multimodal events and generates them. In utterance (25), Rea introduces a new discourse purpose [goal: ShowHouse]. Rea, using a default rule, decides to take the initiative on this goal. At this point, Rea accesses the discourse state and confirms that a new goal is about to start. Rea judges this case as a discourse segment boundary and also a turn boundary (Case (a)). Based on this information, Rea selects a high-energy posture shift. An example of Rea's high-energy posture shift is shown on the right in Figure 6.2. As a subdialogue of showing a house, in a discourse purpose [goal: DiscussFeature], Rea keeps the turn and continues to describe the house. We handle this type of interaction as a pseudo-monologue.
Therefore, we can use Table 5.3.2 for deciding on posture shifts here. In utterance (27), Rea starts the discussion about the house and takes the initiative. This is judged as Case (g), and a high-energy body motion is generated 84% of the time.

7. Conclusion and Further Work

We have demonstrated a clear relationship between nonverbal behavior and discourse state, and shown how this finding can be incorporated into the generation of language and nonverbal behaviors for an embodied conversational agent. Speakers produce posture shifts at 53% of discourse segment boundaries, more frequently than they produce those shifts discourse-segment-internally, and with more motion energy. Furthermore, there is a relationship between discourse structure and conversational structure such that when speakers initiate a new segment at the same time as starting a turn (the most frequent case by far), they are more likely to produce a posture shift; when they end a discourse segment and a turn at the same time, their posture shifts last longer than when these categories do not co-occur. Although this paper reports results from a limited number of monologues and dialogues, the findings are promising. In addition, they point the way to a number of future directions, both within the study of posture and discourse, and more generally within the study of nonverbal behaviors in computational linguistics.

Figure 6.2: Rea demonstrating a low and high energy posture shift

First, given the relationship between conversational and information structure in [5], a natural next step is to examine the three-way relationship between discourse state, conversational structure (turns), and information structure (theme/rheme). For the moment, we have demonstrated that posture shifts may signal boundaries of units; do they also signal the information content of units?
Next, we need to look at finer segmentations of the discourse, to see whether larger and smaller discourse segments are distinguished through non-verbal means. Third, the question of listener posture is an important one. We found that a number of posture shifts were produced by the participant who was not speaking. More than half of these shifts were produced at the same time as a speaker shift, suggesting a kind of mirroring. In order to interpret these data, however, a more sensitive notion of turn structure is required, as one must be ready to define when exactly speakers and listeners shift roles. Also, of course, evaluation of the importance of such nonverbal behaviors to user interaction is essential. In a user study of our earlier Gandalf system [4], users rated the agent's language skills significantly higher under test conditions in which Gandalf deployed conversational behaviors (gaze, head movement and limited gesture) than when these behaviors were disabled. Such an evaluation is also necessary for the Rea-posture system. But, more generally, we need to test whether generating posture shifts of this sort actually serves as a signal to listeners, for example to signal initiative in task and dialogue [8]. These evaluations form part of our future research plans.

Figure 6.1: Example dialogue structure

[Finding a house] <dialogue>
(1) U: I'm looking for a house.
(2) R: (c) Where do you want to live? (f)
(3) U: I like Boston.
(4) R: (c) (d) What kind of transportation access do you need? (f)
(5) U: I need T access.
...
(23) R: (c) (d) How much storage space do you need? (f)
(24) U: I need to have a storage place in the basement.
(25) R: (a) (d) Let's look at 123 Elm Street. (f)
(26) U: OK.
[Showing a house] <Pseudo-monologue>
[Discuss a feature of the house]
(27) R: (g) Let's discuss a feature of this place.
(28) R: (h) Notice the hardwood flooring in the living room.
(29) R: (h) Notice the jacuzzi.
(30) R: (h) Notice the remodeled kitchen.

8.
Acknowledgements This research was supported by MERL, France Telecom, AT&T, and the other generous sponsors of the MIT Media Lab. Thanks to the other members of the Gesture and Narrative Language Group, in particular Ian Gouldstone and Hannes Vilhjálmsson. 9. REFERENCES [1] Andre, E., Rist, T., & Muller, J., Employing AI methods to control the behavior of animated interface agents, Applied Artificial Intelligence, vol. 13, pp. 415-448, 1999. [2] Cassell, J., Bickmore, T., Billinghurst, M., Campbell, L., Chang, K., Vilhjalmsson, H., & Yan, H., Embodiment in Conversational Interfaces: Rea, Proc. of CHI 99, Pittsburgh, PA, ACM, 1999. [3] Cassell, J., Stone, M., & Yan, H., Coordination and context-dependence in the generation of embodied conversation, Proc. INLG 2000, Mitzpe Ramon, Israel, 2000. [4] Cassell, J. and Thorisson, K. R., The Power of a Nod and a Glance: Envelope vs. Emotional Feedback in Animated Conversational Agents, Applied Art. Intell., vol. 13, pp. 519-538, 1999. [5] Cassell, J., Torres, O., & Prevost, S., Turn Taking vs. Discourse Structure: How Best to Model Multimodal Conversation., in Machine Conversations, Y. Wilks, Ed. The Hague: Kluwer, 1999, pp. 143-154. [6] Cassell, J., Vilhjálmsson, H., & Bickmore, T., BEAT: The Behavior Expression Animation Toolkit, Proc. of SIGGRAPH, ACM Press, 2001. [7] Chovil, N., Discourse-Oriented Facial Displays in Conversation, Research on Language and Social Interaction, vol. 25, pp. 163-194, 1992. [8] Chu-Carroll, J. & Brown, M., Initiative in Collaborative Interactions - Its Cues and Effects, Proc. of AAAI Spring 1997 Symp. on Computational Models of Mixed Initiative, 1997. [9] Condon, W. S. & Osgton, W. D., Speech and body motion synchrony of the speaker-hearer, in The perception of language, D. Horton & J. Jenkins, Eds. NY: Academic Press, 1971, pp. 150-184. [10] Duncan, S., On the structure of speaker-auditor interaction during speaking turns, Language in Society, vol. 3, pp. 161-180, 1974. 
[11] Green, N., Carenini, G., Kerpedjiev, S., & Roth, S., A Media-Independent Content Language for Integrated Text and Graphics Generation, Proc. of Workshop on Content Visualization and Intermedia Representations at COLING and ACL '98, 1998. [12] Grosz, B. & Sidner, C., Attention, Intentions, and the Structure of Discourse, Computational Linguistics, vol. 12, pp. 175-204, 1986. [13] Kendon, A., Some Relationships between Body Motion and Speech, in Studies in Dyadic Communication, A. W. Siegman and B. Pope, Eds. Elmsford, NY: Pergamon Press, 1972, pp. 177-210. [14] Lesh, N., Rich, C., & Sidner, C., Using Plan Recognition in Human-Computer Collaboration, Proc. of the Conference on User Modelling, Banff, Canada, NY: Springer Wien, 1999. [15] Lester, J., Towns, S., Callaway, C., Voerman, J., & FitzGerald, P., Deictic and Emotive Communication in Animated Pedagogical Agents, in Embodied Conversational Agents, J. Cassell, J. Sullivan, et al., Eds. Cambridge: MIT Press, 2000. [16] Lochbaum, K., A Collaborative Planning Model of Intentional Structure, Computational Linguistics, vol. 24, pp. 525-572, 1998. [17] McNeill, D., Hand and Mind: What Gestures Reveal about Thought. Chicago, IL/London, UK: The University of Chicago Press, 1992. [18] Rich, C. & Sidner, C. L., COLLAGEN: A Collaboration Manager for Software Interface Agents, User Modeling and User-Adapted Interaction, vol. 8, pp. 315-350, 1998. [19] Rickel, J. & Johnson, W. L., Task-Oriented Collaboration with Embodied Agents in Virtual Worlds, in Embodied Conversational Agents, J. Cassell, Ed. Cambridge, MA: MIT Press, 2000. [20] Sidner, C., An Artificial Discourse Language for Collaborative Negotiation, Proc. of 12th National Conf. on Artificial Intelligence (AAAI), Seattle, WA, MIT Press, 1994. [21] Takeuchi, A. & Nagao, K., Communicative facial displays as a new conversational modality, Proc. of InterCHI '93, Amsterdam, NL, ACM, 1993. [22] Thompson, L.
and Massaro, D., Evaluation and Integration of Speech and Pointing Gestures during Referential Understanding, Journal of Experimental Child Psychology, vol. 42, pp. 144-168, 1986.
2001
Immediate-Head Parsing for Language Models*

Eugene Charniak
Brown Laboratory for Linguistic Information Processing
Department of Computer Science
Brown University, Box 1910, Providence RI
[email protected]

Abstract

We present two language models based upon an "immediate-head" parser — our name for a parser that conditions all events below a constituent c upon the head of c. While all of the most accurate statistical parsers are of the immediate-head variety, no previous grammatical language model uses this technology. The perplexity for both of these models significantly improves upon the trigram-model baseline as well as the best previous grammar-based language model. For the better of our two models these improvements are 24% and 14% respectively. We also suggest that improvement of the underlying parser should significantly improve the model's perplexity and that even in the near term there is a lot of potential for improvement in immediate-head language models.

1 Introduction

All of the most accurate statistical parsers [1,3,6,7,12,14] are lexicalized in that they condition probabilities on the lexical content of the sentences being parsed. Furthermore, all of these parsers are what we will call immediate-head parsers in that all of the properties of the immediate descendants of a constituent c are assigned probabilities that are conditioned on the lexical head of c.

* This research was supported in part by NSF grant LIS SBR 9720368 and by NSF grant 00100203 IIS0085980. The author would like to thank the members of the Brown Laboratory for Linguistic Information Processing (BLLIP) and particularly Brian Roark who gave very useful tips on conducting this research. Thanks also to Fred Jelinek and Ciprian Chelba for the use of their data and for detailed comments on earlier drafts of this paper.
For example, in Figure 1 the probability that the vp expands into v np pp is conditioned on the head of the vp, "put", as are the choices of the sub-heads under the vp, i.e., "ball" (the head of the np) and "in" (the head of the pp). It is the experience of the statistical parsing community that immediate-head parsers are the most accurate we can design. It is also worthy of note that many of these parsers [1,3,6,7] are generative — that is, for a sentence s they try to find the parse π defined by Equation 1:

argmax_π p(π | s) = argmax_π p(π, s)   (1)

This is interesting because insofar as they compute p(π, s) these parsers define a language model in that they can (in principle) assign a probability to all possible sentences in the language by computing the sum in Equation 2:

p(s) = Σ_π p(π, s)   (2)

where p(π, s) is zero if the yield of π ≠ s. Language models, of course, are of interest because speech-recognition systems require them. These systems determine the words that were spoken by solving Equation 3:

argmax_s p(s | A) = argmax_s p(s) p(A | s)   (3)

where A denotes the acoustic signal. The first term on the right, p(s), is the language model, and is what we compute via parsing in Equation 2.

Figure 1: A tree showing head information (a parse of "put the ball in the box", with heads vp/put, np/ball, pp/in, np/box)

Virtually all current speech recognition systems use the so-called trigram language model in which the probability of a string is broken down into conditional probabilities on each word given the two previous words, e.g.,

p(w_{0,n}) = Π_{i=0..n-1} p(w_i | w_{i-1}, w_{i-2})   (4)

On the other hand, in the last few years there has been interest in designing language models based upon parsing and Equation 2. We now turn to this previous research.
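The trigram decomposition of Equation 4 is easy to state in code. Below is a minimal sketch; the conditional-probability table and the <s> padding convention for the first two positions are our own illustrative assumptions:

```python
import math

def trigram_logprob(words, cond_prob):
    """Log probability of a word string under Equation 4:
    each word is conditioned on the two preceding words.
    `cond_prob` maps (w_{i-2}, w_{i-1}, w_i) -> p(w_i | w_{i-1}, w_{i-2})."""
    padded = ["<s>", "<s>"] + list(words)
    return sum(math.log(cond_prob[(padded[i - 2], padded[i - 1], padded[i])])
               for i in range(2, len(padded)))
```

In practice a real trigram model would also smooth these conditional estimates; the table lookup here assumes every needed trigram is present.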
2 Previous Work

There is, of course, a very large body of literature on language modeling (for an overview, see [10]) and even the literature on grammatical language models is becoming moderately large [4,9,15,16,17]. The research presented in this paper is most closely related to two previous efforts, that by Chelba and Jelinek [4] (C&J) and that by Roark [15], and this review concentrates on these two papers. While these two works differ in many particulars, we stress here the ways in which they are similar, and similar in ways that differ from the approach taken in this paper. In both cases the grammar-based language model computes the probability of the next word based upon the previous words of the sentence. More specifically, these grammar-based models compute a subset of all possible grammatical relations for the prior words, and then compute

- the probability of the next grammatical situation, and
- the probability of seeing the next word given each of these grammatical situations.

Also, when computing the probability of the next word, both models condition on the two prior heads of constituents. Thus, like a trigram model, they use information about triples of words. Neither of these models uses an immediate-head parser. Rather they are both what we will call strict left-to-right parsers. At each sentence position in strict left-to-right parsing one computes the probability of the next word given the previous words (and does not go back to modify such probabilities). This is not possible in immediate-head parsing. Sometimes the immediate head of a constituent occurs after it (e.g., in noun phrases, where the head is typically the rightmost noun) and thus is not available for conditioning by a strict left-to-right parser. There are two reasons why one might prefer strict left-to-right parsing for a language model (Roark [15] and Chelba, personal communication). First, the search procedure for guessing the words that correspond to the acoustic signal works left to right in the string. If the language model is to offer guidance to the search procedure it must do so as well. The second benefit of strict left-to-right parsing is that it is easily combined with the standard trigram model. In both cases at every point in the sentence we compute the probability of the next word given the prior words. Thus one can interpolate the trigram and grammar probability estimates for each word to get a more robust estimate. It turns out that this is a good thing to do, as is clear from Table 1, which gives perplexity results for a trigram model of the data in column one, results for the grammar model in column two, and results for a model in which the two are interpolated in column three.

Table 1: Perplexity results for two previous grammar-based language models

Model   Trigram   Grammar   Interpolation
C&J     167.14    158.28    148.90
Roark   167.02    152.26    137.26

Both models were trained and tested on the same training and testing corpora, to be described in Section 4.1. As indicated in the table, the trigram model achieved a perplexity of 167 for the testing corpus. The grammar models did slightly better (e.g., 158.28 for the Chelba and Jelinek (C&J) parser), but it is the interpolation of the two that is clearly the winner (e.g., 137.26 for the Roark parser/trigram combination). In both papers the interpolation constants were 0.36 for the trigram estimate and 0.64 for the grammar estimate. While both of these reasons for strict left-to-right parsing (search and trigram interpolation) are valid, they are not necessarily compelling. The ability to combine easily with trigram models is important only as long as trigram models can improve grammar models. A sufficiently good grammar model would obviate the need for trigrams. As for the search problem, we briefly return to this point at the end of the paper.
Here we simply note that while search requires that a language model provide probabilities in a left-to-right fashion, one can easily imagine procedures where these probabilities are revised after new information is found (i.e., the head of the constituent). Note that already our search procedure needs to revise previous most-likely-word hypotheses when the original guess makes the subsequent words very unlikely. Revising the associated language-model probabilities complicates the search procedure, but not unimaginably so. Thus it seems to us that it is worth finding out whether the superior parsing performance of immediate-head parsers translates into improved language models.

3 The Immediate-Head Parsing Model

We have taken the immediate-head parser described in [3] as our starting point. This parsing model assigns a probability to a parse π by a top-down process of considering each constituent c in π and, for each c, first guessing the pre-terminal of c, t(c) (t for "tag"), then the lexical head of c, h(c), and then the expansion of c into further constituents e(c). Thus the probability of a parse is given by the equation

p(π) = Π_{c∈π} p(t(c) | l(c), H(c)) p(h(c) | t(c), l(c), H(c)) p(e(c) | l(c), t(c), h(c), H(c))

where l(c) is the label of c (e.g., whether it is a noun phrase (np), verb phrase, etc.) and H(c) is the relevant history of c — information outside c that our probability model deems important in determining the probability in question. In [3] H(c) approximately consists of the label, head, and head-part-of-speech for the parent of c: m(c), i(c), and u(c) respectively. One exception is the distribution p(e(c) | l(c), t(c), h(c), H(c)), where H only includes m and u.¹ Whenever it is clear to which constituent we are referring we omit the (c) in, e.g., h(c). In this notation the above equation takes the following form:

p(π) = Π_{c∈π} p(t | l, m, u, i) p(h | t, l, m, u, i) p(e | l, t, h, m, u).   (5)
Because this is a point of contrast with the parsers described in the previous section, note that all of the conditional distributions are conditioned on one lexical item (either i or h). Thus only p(h | t, l, m, u, i), the distribution for the head of c, looks at two lexical items (i and h itself), and none of the distributions look at three lexical items as do the trigram distribution of Equation 4 and the previously discussed parsing language models [4,15]. Next we describe how we assign a probability to the expansion e of a constituent. We break up a traditional probabilistic context-free grammar (PCFG) rule into a left-hand side with a label l(c) drawn from the non-terminal symbols of our grammar, and a right-hand side that is a sequence of one or more such symbols. For each expansion we distinguish one of the right-hand side labels as the "middle" or "head" symbol M(c). M(c) is the constituent from which the head lexical item h is obtained according to deterministic rules that pick the head of a constituent from among the heads of its children. To the left of M is a sequence of one or more left labels L_i(c), including the special termination symbol △, which indicates that there are no more symbols to the left, and similarly for the labels to the right, R_i(c). Thus an expansion e(c) looks like:

l → △ L_m ... L_1 M R_1 ... R_n △.   (6)

The expansion is generated by guessing first M, then in order L_1 through L_{m+1} (= △), and similarly for R_1 through R_{n+1}. In anticipation of our discussion in Section 4.2, note that when we are expanding an L_i we do not know the lexical items to its left, but if we properly dovetail our "guesses" we can be sure of what word, if any, appears to its right and before M, and similarly for the word to the left of R_j. This makes such words available to be conditioned upon.

¹ We simplify slightly in this section. See [3] for all the details on the equations as well as the smoothing used.
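The generation order for Equation 6 (first M, then the L_i outward to the stop symbol, then the R_i) can be sketched as follows. The probability tables here are context-free stand-ins for the paper's conditional distributions p(L | l, t, h, m, u) and friends, so this is illustrative only:

```python
import math

STOP = "<STOP>"

def expansion_logprob(left_labels, M, right_labels, pM, pL, pR):
    """Score an expansion l -> STOP L_m ... L_1 M R_1 ... R_n STOP.
    `left_labels` is given in surface order [L_m, ..., L_1]; generation
    proceeds from M outward, ending each side with the stop symbol."""
    logp = math.log(pM[M])
    for lab in list(reversed(left_labels)) + [STOP]:  # L_1, ..., L_m, STOP
        logp += math.log(pL[lab])
    for lab in list(right_labels) + [STOP]:           # R_1, ..., R_n, STOP
        logp += math.log(pR[lab])
    return logp
```

The stop symbol on each side is what lets the model assign a proper probability to expansions of every length, since the side lengths m and n are not chosen in advance.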
Finally, the parser of [3] deviates in two places from the strict dictates of a language model. First, as explicitly noted in [3], the parser does not compute the partition function (normalization constant) for its distributions, so the numbers it returns are not true probabilities. We noted there that if we replaced the “max-ent inspired” feature with standard deleted interpolation smoothing, we took a significant hit in performance. We have now found several ways to overcome this problem, including some very efficient ways to compute partition functions for this class of models. In the end, however, this was not necessary, as we found that we could obtain equally good performance by “hand-crafting” our interpolation smoothing rather than using the “obvious” method (which performs poorly).

Secondly, as noted in [2], the parser encourages right branching with a “bonus” multiplicative factor of 1.2 for constituents that end at the right boundary of the sentence, and a penalty of 0.8 for those that do not. This is replaced by explicitly conditioning the events in the expansion of Equation 6 on whether or not the constituent is at the right boundary (barring sentence-final punctuation). Again, with proper attention to details, this can be known at the time the expansion is taking place. This modification is much more complex than the multiplicative “hack,” and it is not quite as good (we lose about 0.1% in precision/recall figures), but it does allow us to compute true probabilities.

The resulting parser strictly speaking defines a PCFG in that all of the extra conditioning information could be included in the non-terminal node labels (as we did with the head information in Figure 1). When a PCFG probability distribution is estimated from training data (in our case the Penn tree-bank), PCFGs define a tight (summing to one) probability distribution over strings [5], thus making them appropriate for language models.
We also empirically checked that our individual distributions (p(t | l, m, u, i) and p(h | t, l, m, u, i) from Equation 5, and p(L | l, t, h, m, u), p(M | l, t, h, m, u), and p(R | l, t, h, m, u) from Equation 5) sum to one for a large, random selection of conditioning events.2

As with [3], a subset of parses is computed with a non-lexicalized PCFG, and the most probable edges (using an empirically established threshold) have their probabilities recomputed according to the complete probability model of Equation 5. Both searches are conducted using dynamic programming.

4 Experiments

4.1 The Immediate-Bihead Language Model

The parser as described in the previous section was trained and tested on the data used in the previously described grammar-based language modeling research [4, 15]. This data is from the Penn Wall Street Journal tree-bank [13], but modified to make the text more “speech-like”. In particular:

1. all punctuation is removed,
2. no capitalization is used,
3. all symbols and digits are replaced by the symbol N, and
4. all words except for the 10,000 most common are replaced by the symbol UNK.

As in previous work, files F0 to F20 are used for training, F21-F22 for development, and F23-F24 for testing. The results are given in Table 2. We refer to the current model as the bihead model. “Bihead” here emphasizes the already noted fact that in this model probabilities involve at most two lexical heads.

Table 2: Perplexity results for the immediate-bihead model

Model  | Trigram | Grammar | Interpolation
C&J    | 167.14  | 158.28  | 148.90
Roark  | 167.02  | 152.26  | 137.26
Bihead | 167.89  | 144.98  | 133.15

As seen in Table 2, the immediate-bihead model with a perplexity of 144.98 outperforms both previous models, even though they use trigrams of words in their probability estimates.

2 They should sum to one. We are just checking that there are no bugs in the code.
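For reference, perplexity figures like those in Table 2 are computed from per-word log probabilities in the usual way; a minimal sketch:

```python
import math

def perplexity(sentence_logprobs, total_words):
    """Corpus perplexity: the exponential of the negative average
    per-word log probability over the test set."""
    return math.exp(-sum(sentence_logprobs) / total_words)
```

For example, a test set in which every word receives probability 1/8 has perplexity exactly 8.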
We also interpolated our parsing model with the trigram model (interpolation constant .36, as with the other models) and this model outperforms the other interpolation models. Note, however, that because our parser does not define probabilities for each word based upon previous words (as with trigram) it is not possible to do the integration at the word level. Rather we interpolate the probabilities of the entire sentences. This is a much less powerful technique than the word-level interpolation used by both C&J and Roark, but we still observe a significant gain in performance.

4.2 The Immediate-Trihead Model

While the performance of the grammatical model is good, a look at sentences for which the trigram model outperforms it makes its limitations apparent. The sentences in question have noun phrases like “monday night football” that trigram models eat up but on which our bihead parsing model performs less well. For example, consider the sentence “he watched monday night football”. The trigram model assigns this a probability of 1.9 × 10⁻⁵, while the grammar model gives it a probability of 2.77 × 10⁻⁷. To a first approximation, this is entirely due to the difference in probability of the noun-phrase. For example, the trigram probability p(football | monday, night) = 0.366, and would have been 1.0 except that smoothing saved some of the probability for other things it might have seen but did not. Because the grammar model conditions in a different order, the closest equivalent probability would be that for “monday”, but in our model this is only conditioned on “football” so the probability is much less biased, only 0.0306. (Penn tree-bank base noun-phrases are flat, thus the head above “monday” is “football”.) This immediately suggests creating a second model that captures some of the trigram-like probabilities that the immediate-bihead model misses.

[Figure 2: A noun-phrase with sub-structure — a left-branching tree over “monday night football” with an nbar node under the np.]
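The sentence-level interpolation described above can be sketched as follows; which of the two models the constant .36 weights is our assumption, not stated explicitly in the text:

```python
def interpolate_sentence(p_grammar, p_trigram, lam=0.36):
    """Mix whole-sentence probabilities from the two models.
    Word-level interpolation is not possible here because the grammar
    model does not assign left-to-right per-word probabilities."""
    return lam * p_grammar + (1.0 - lam) * p_trigram
```

The mixture is applied once per sentence rather than once per word, which is why the text calls it a much less powerful technique than the word-level interpolation of C&J and Roark.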
The most obvious extension would be to condition upon not just one’s parent’s head, but one’s grandparent’s as well. This does capture some of the information we would like, particularly for heads of noun-phrases inside prepositional phrases. For example, in “united states of america”, the probability of “america” is now conditioned not just on “of” (the head of its parent) but also on “states”. Unfortunately, for most of the cases where trigram really cleans up this revision would do little. Thus, in “he watched monday night football”, “monday” would now be conditioned upon “football” and “watched.” The addition of “watched” is unlikely to make much difference, certainly compared to the boost trigram models get by, in effect, recognizing the complete name.

It is interesting to note, however, that virtually all linguists believe that a noun-phrase like “monday night football” has significant substructure — e.g., it would look something like Figure 2. If we assume this tree-structure the two heads above “monday” are “night” and “football” respectively, thus giving our trihead model the same power as the trigram for this case. Ignoring some of the conditioning events, we now get a probability p(h = monday | i = night, j = football), which is much higher than the corresponding bihead version p(h = monday | i = football). The reader may remember that h is the head of the current constituent, while i is the head of its parent. We now define j to be the grandparent head.

We decided to adopt this structure, but to keep things simple we only changed the definition of “head” for the distribution p(h | t, l, m, u, i, j). Thus we adopted the following revised definition of head for constituents of base noun-phrases:

For a pre-terminal (e.g., noun) constituent c of a base noun-phrase in which it is not the standard head (h) and which has as its right-sister another pre-terminal constituent d which is not itself h, the head of c is the head of d.
The sole exceptions to this rule are phrase-initial determiners and numbers, which retain h as their heads. In effect this definition assumes that the substructure of all base noun-phrases is left branching, as in Figure 2. This is not true, but Lauer [11] shows that about two-thirds of all branching in base-noun-phrases is leftward. We believe we would get even better results if the parser could determine the true branching structure.

We then adopt the following definition of a grandparent-head feature j:

1. if c is a noun phrase under a prepositional phrase, or is a pre-terminal which takes a revised head as defined above, then j is the grandparent head of c, else
2. if c is a pre-terminal and is not next (in the production generating c) to the head of its parent (i), then j(c) is the head of the constituent next to c in the production in the direction of the head of that production, else
3. j is a “none-of-the-above” symbol.

Case 1 now covers both the “united states of america” and “monday night football” examples. Case 2 handles other flat constituents in Penn tree-bank style (e.g., quantifier-phrases) for which we do not have a good analysis. Case 3 says that this feature is a no-op in all other situations.

Table 3: Perplexity results for the immediate-trihead model

Model   | Trigram | Grammar | Interpolation
C&J     | 167.14  | 158.28  | 148.90
Roark   | 167.02  | 152.26  | 137.26
Bihead  | 167.89  | 144.98  | 133.15
Trihead | 167.89  | 130.20  | 126.07

The results for this model, again trained on F0-F20 and tested on F23-24, are given in Table 3 under the heading “Trihead”. We see that the grammar perplexity is reduced to 130.20, a reduction of 10% over our first model, 14% over the previous best grammar model (152.26), and 22% over the best of the above trigram models for the task (167.02). When we run the trigram and new grammar model in tandem we get a perplexity of 126.07, a reduction of 8% over the best previous tandem model and 24% over the best trigram model.
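The revised head definition for flat base noun-phrases described above, including the phrase-initial determiner exception, can be sketched as follows. This is a simplification that assumes the standard head h is the rightmost word of the flat NP:

```python
def revised_heads(words, determiners=("the", "a", "an")):
    """Assign each non-head pre-terminal of a flat base NP the head of
    its right sister, unless that sister is the standard head h itself,
    in which case (and for phrase-initial determiners) it keeps h.
    Returns a map from word position to that word's revised head."""
    h = words[-1]                        # standard head: rightmost word
    heads = {}
    for pos, w in enumerate(words[:-1]):
        if pos == 0 and w in determiners:
            heads[pos] = h               # exception: phrase-initial determiner
        elif pos + 2 < len(words):
            heads[pos] = words[pos + 1]  # head of right-sister pre-terminal
        else:
            heads[pos] = h               # right sister is h itself
    return heads
```

On “monday night football” this gives “monday” the head “night”, exactly the left-branching behaviour the text describes.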
4.3 Discussion

One interesting fact about the immediate-trihead model is that of the 3761 sentences in the test corpus, on 2934, or about 75%, the grammar model assigns a higher probability to the sentence than does the trigram model. One might well ask what went “wrong” with the remaining 25%? Why should the grammar model ever get beaten? Three possible reasons come to mind:

1. The grammar model is better, but only by a small amount, and due to sparse data problems occasionally the worse model will luck out and beat the better one.
2. The grammar model and the trigram model capture different facts about the distribution of words in the language, and for some set of sentences one distribution will perform better than the other.
3. The grammar model is, in some sense, always better than the trigram model, but if the parser bungles the parse, then the grammar model is impacted very badly. Obviously the trigram model has no such Achilles’ heel.

Table 4: Precision/recall for sentences in which trigram/grammar models performed best

Sentence Group | Num. | Labeled Precision | Labeled Recall
All Sentences  | 3761 | 84.6%             | 83.7%
Grammar High   | 2934 | 85.7%             | 84.9%
Trigram High   |  827 | 80.1%             | 79.0%

We ask this question because what we should do to improve performance of our grammar-based language models depends critically on which of these explanations is correct: if (1), we should collect more data; if (2), we should just live with the tandem grammar-trigram models; and if (3), we should create better parsers. Based upon a few observations on sentences from the development corpus for which the trigram model gave higher probabilities, we hypothesized that reason (3), bungled parses, is primary. To test this we performed the following experiment. We divide the sentences from the test corpus into two groups, ones for which the trigram model performs better, and the ones for which the grammar model does better.
We then collect labeled precision and recall statistics (the standard parsing performance measures) separately for each group. If our hypothesis is correct we expect the “grammar higher” group to have more accurate parses than the trigram-higher group, as the poor parse would cause poor grammar perplexity for the sentence, which would then be worse than the trigram perplexity. If either of the other two explanations were correct one would not expect much difference between the two groups.

The results are shown in Table 4. We see there that, for example, sentences for which the grammar model has the superior perplexity have average recall 5.9 (= 84.9 − 79.0) percentage points higher than the sentences for which the trigram model performed better. The gap for precision is 5.6. This seems to support our hypothesis.

5 Conclusion and Future Work

We have presented two grammar-based language models, both of which significantly improve upon both the trigram model baseline for the task (by 24% for the better of the two) and the best previous grammar-based language model (by 14%). Furthermore we have suggested that improvement of the underlying parser should improve the model’s perplexity still further.

We should note, however, that if we were dealing with standard Penn Tree-bank Wall Street Journal text, asking for better parsers would be easier said than done. While there is still some progress, it is our opinion that substantial improvement in the state-of-the-art precision/recall figures (around 90%) is unlikely in the near future.3 However, we are not dealing with standard tree-bank text. As pointed out above, the text in question has been “speechified” by removing punctuation and capitalization, and “simplified” by allowing only a fixed vocabulary of 10,000 words (replacing all the rest by the symbol “UNK”), and replacing all digits and symbols by the symbol “N”.
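The normalization steps recapped here can be sketched as follows; the tokenized input and the vocabulary set are assumptions of this sketch:

```python
def speechify(tokens, vocab):
    """Make tree-bank text 'speech-like': drop punctuation, lowercase,
    map tokens containing digits to N, and map words outside the
    10,000-word vocabulary to UNK."""
    out = []
    for tok in tokens:
        if not any(ch.isalnum() for ch in tok):
            continue                   # punctuation: removed entirely
        tok = tok.lower()              # no capitalization
        if any(ch.isdigit() for ch in tok):
            tok = "N"                  # symbols and digits -> N
        elif tok not in vocab:
            tok = "UNK"                # rare words -> UNK
        out.append(tok)
    return out
```

This makes concrete how much lexical information the normalization discards, which is the point of the discussion that follows.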
We believe that the resulting text grossly underrepresents the useful grammatical information available to speech-recognition systems. First, we believe that information about rare or even truly unknown words would be useful. For example, when run on standard text, the parser uses ending information to guess parts of speech [3]. Even if we had never encountered the word “showboating”, the “ing” ending tells us that this is almost certainly a progressive verb. It is much harder to determine this about UNK.4

Secondly, while punctuation is not to be found in speech, prosody should give us something like equivalent information, perhaps even better. Thus significantly better parser performance on speech-derived data seems possible, suggesting that high-performance trigram-less language models may be within reach. We believe that the adaptation of prosodic information to parsing use is a worthy topic for future research.

Finally, we have noted two objections to immediate-head language models: first, they complicate left-to-right search (since heads are often to the right of their children), and second, they cannot be tightly integrated with trigram models. The possibility of trigram-less language models makes the second of these objections without force. Nor do we believe the first to be a permanent disability.

3 Furthermore, some of the newest wrinkles [8] use discriminative methods and thus do not define language models at all, seemingly making them ineligible for the competition on a priori grounds.

4 To give the reader some taste for the difficulties presented by UNKs, we encourage you to try parsing the following real example: “its supposedly unk unk unk a unk that makes one unk the unk of unk unk the unk radical unk of unk and unk and what in unk even seems like unk in unk”.
If one is willing to provide sub-optimal probability estimates as one proceeds left-to-right and then amend them upon seeing the true head, left-to-right processing and immediate-head parsing might be joined. Note that one of the cases where this might be worrisome (early words in a base noun-phrase could be conditioned upon a head which comes several words later) has been made significantly less problematic by our revised definition of heads inside noun-phrases. We believe that other such situations can be brought into line as well, thus again taming the search problem. However, this too is a topic for future research.

References

1. BOD, R. What is the minimal set of fragments that achieves maximal parse accuracy? In Proceedings of Association for Computational Linguistics 2001. 2001.
2. CHARNIAK, E. Tree-bank grammars. In Proceedings of the Thirteenth National Conference on Artificial Intelligence. AAAI Press/MIT Press, Menlo Park, 1996, 1031–1036.
3. CHARNIAK, E. A maximum-entropy-inspired parser. In Proceedings of the 2000 Conference of the North American Chapter of the Association for Computational Linguistics. ACL, New Brunswick NJ, 2000.
4. CHELBA, C. AND JELINEK, F. Exploiting syntactic structure for language modeling. In Proceedings of COLING-ACL 98. ACL, New Brunswick NJ, 1998, 225–231.
5. CHI, Z. AND GEMAN, S. Estimation of probabilistic context-free grammars. Computational Linguistics 24, 2 (1998), 299–306.
6. COLLINS, M. J. Three generative lexicalized models for statistical parsing. In Proceedings of the 35th Annual Meeting of the ACL. 1997, 16–23.
7. COLLINS, M. J. Head-Driven Statistical Models for Natural Language Parsing. University of Pennsylvania, Ph.D. dissertation, 1999.
8. COLLINS, M. J. Discriminative reranking for natural language parsing. In Proceedings of the International Conference on Machine Learning (ICML 2000). 2000.
9. GODDEAU, D. Using probabilistic shift-reduce parsing in speech recognition systems.
In Proceedings of the 2nd International Conference on Spoken Language Processing. 1992, 321–324.
10. GOODMAN, J. Putting it all together: language model combination. In ICASSP-2000. 2000.
11. LAUER, M. Corpus statistics meet the noun compound: some empirical results. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. 1995, 47–55.
12. MAGERMAN, D. M. Statistical decision-tree models for parsing. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. 1995, 276–283.
13. MARCUS, M. P., SANTORINI, B. AND MARCINKIEWICZ, M. A. Building a large annotated corpus of English: the Penn treebank. Computational Linguistics 19 (1993), 313–330.
14. RATNAPARKHI, A. Learning to parse natural language with maximum entropy models. Machine Learning 34, 1/2/3 (1999), 151–176.
15. ROARK, B. Probabilistic top-down parsing and language modeling. Computational Linguistics (forthcoming).
16. STOLCKE, A. An efficient probabilistic context-free parsing algorithm that computes prefix probabilities. Computational Linguistics 21 (1995), 165–202.
17. STOLCKE, A. AND SEGAL, J. Precise n-gram probabilities from stochastic context-free grammars. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics. 1994, 74–79.
2001
Constraints on strong generative power

David Chiang
University of Pennsylvania
Dept of Computer and Information Science
200 S 33rd St
Philadelphia, PA 19104 USA
[email protected]

Abstract

We consider the question “How much strong generative power can be squeezed out of a formal system without increasing its weak generative power?” and propose some theoretical and practical constraints on this problem. We then introduce a formalism which, under these constraints, maximally squeezes strong generative power out of context-free grammar. Finally, we generalize this result to formalisms beyond CFG.

1 Introduction

“How much strong generative power can be squeezed out of a formal system without increasing its weak generative power?” This question, posed by Joshi (2000), is important for both linguistic description and natural language processing. The extension of tree adjoining grammar (TAG) to tree-local multicomponent TAG (Joshi, 1987), or the extension of context-free grammar (CFG) to tree insertion grammar (Schabes and Waters, 1993) or regular form TAG (Rogers, 1994) can be seen as steps toward answering this question. But this question is difficult to answer with much finality unless we pin its terms down more precisely.

First, what is meant by strong generative power? In the standard definition (Chomsky, 1965) a grammar G weakly generates a set of sentences L(G) and strongly generates a set of structural descriptions Σ(G); the strong generative capacity of a formalism F is then {Σ(G) | F provides G}. There is some vagueness in the literature, however, over what structural descriptions are and how they can reasonably be compared across theories (Miller (1999) gives a good synopsis).

[Figure 1: Example of weakly context-free TAG.]

The approach that Vijay-Shanker et al. (1987) and Weir (1988) take, elaborated on by Becker et al.
(1992), is to identify a very general class of formalisms, which they call linear context-free rewriting systems (CFRSs), and define for this class a large space of structural descriptions which serves as a common ground in which the strong generative capacities of these formalisms can be compared. Similarly, if we want to talk about squeezing strong generative power out of a formal system, we need to do so in the context of some larger space of structural descriptions.

Second, why is preservation of weak generative power important? If we interpret this constraint to the letter, it is almost vacuous. For example, the class of all tree adjoining grammars which generate context-free languages includes the grammar shown in Figure 1a (which generates the language {a, b}∗). We can also add the tree shown in Figure 1b without increasing the grammar’s weak generative capacity; indeed, we can add any trees we please, provided they yield only as and bs. Intuitively, the constraint of weak context-freeness has little force.

This intuition is verified if we consider that weak context-freeness is desirable for computational efficiency. Though a weakly context-free TAG might be recognizable in cubic time (if we know the equivalent CFG), it need not be parsable in cubic time — that is, given a string, to compute all its possible structural descriptions will take O(n⁶) time in general. If we are interested in computing structural descriptions from strings, then we need a tighter constraint than preservation of weak generative power.

[Figure 2: Simulation: structural descriptions as derived structures.]
In Section 3 below we examine some restrictions on tree adjoining grammar which are weakly context-free, and observe that their parsers all work in the same way: given a TAG G, they implicitly parse using a CFG G′ which derives not only the same strings as G but also their corresponding structural descriptions under G, in such a way that preserves the dynamic-programming structure of the parsing algorithm. Based on this observation, we replace the constraint of preservation of weak generative power with a constraint of simulability: essentially, a grammar G′ simulates another grammar G if it generates the same strings that G does, as well as their corresponding structural descriptions under G (see Figure 2).

So then, within the class of context-free rewriting systems, how does this constraint of simulability limit strong generative power? In Section 4.1 we define a formalism called multicomponent multifoot TAG (MMTAG) which, when restricted to a regular form, characterizes precisely those CFRSs which are simulable by a CFG. Thus, in the sense we have set forth, this formalism can be said to squeeze as much strong generative power out of CFG as is possible. Finally, we generalize this result to formalisms beyond CFG.

2 Characterizing structural descriptions

First we define context-free rewriting systems. What these formalisms have in common is that their derivation sets are all local sets (that is, generable by a CFG). These derivations are taken as structural descriptions. The following definitions are adapted from Weir (1988).

Definition 1 A generalized context-free grammar G is a tuple ⟨V, S, F, P⟩, where

1. V is a finite set of variables,
2. S ∈ V is a distinguished start symbol,
3.
F is a finite set of function symbols, and
4. P is a finite set of productions of the form

A → f(A1, . . . , An)

where n ≥ 0, f ∈ F, and A, Ai ∈ V.

A generalized CFG G generates a set T(G) of terms, which are interpreted as derivations under some formalism. In this paper we require that G be free of spurious ambiguity, that is, that each term be uniquely generated.

Definition 2 We say that a formalism F is a context-free rewriting system (CFRS) if its derivation sets can be characterized by generalized CFGs, and its derived structures are produced by a function ⟦·⟧F from terms to strings such that for each function symbol f, there is a yield function fF such that

⟦f(t1, . . . , tn)⟧F = fF(⟦t1⟧F, . . . , ⟦tn⟧F)

(A linear CFRS is subject to further restrictions, which we do not make use of.)

As an example, Figure 3 shows a simple TAG with a corresponding GCFG and interpretation.

[Figure 3: Example of TAG with corresponding GCFG and interpretation. Here adjunction at foot nodes is allowed. The GCFG and yield functions are:
S → α(X, Y)    α(⟨x1, x2⟩, ⟨y1, y2⟩) = x1 y1 y2 x2
X → β1(X)     β1(⟨x1, x2⟩) = ⟨a x1, x2 d⟩
X → ϵ()       ϵ() = ⟨ϵ, ϵ⟩
Y → β2(Y)     β2(⟨y1, y2⟩) = ⟨b y1, y2 c⟩
Y → ϵ()       ϵ() = ⟨ϵ, ϵ⟩ ]

A nice property of CFRS is that any formalism which can be defined as a CFRS immediately lends itself to several extensions, which arise when we give additional interpretations to the function symbols. For example, we can interpret the functions as ranging over probabilities, creating a stochastic grammar; or we can interpret them as yield functions of another grammar, creating a synchronous grammar.

Now we define strong generative capacity as the relationship between strings and structural descriptions.1

Definition 3 The strong generative capacity of a grammar G from a CFRS F is the relation {⟨⟦t⟧F, t⟩ | t ∈ T(G)}.

1 This is similar in spirit, but not the same as, the notion of derivational generative capacity (Becker et al., 1992).
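Definition 2's compositional interpretation can be made concrete with the yield functions of the Figure 3 grammar. The term encoding and the ASCII names (`alpha`, `beta1`, etc.) below are our own:

```python
# Yield functions of the Figure 3 GCFG; a pair (x1, x2) is a tuple yield.
YIELDS = {
    "alpha": lambda x, y: x[0] + y[0] + y[1] + x[1],
    "beta1": lambda x: ("a" + x[0], x[1] + "d"),
    "beta2": lambda y: ("b" + y[0], y[1] + "c"),
    "eps":   lambda: ("", ""),
}

def interpret(term):
    """[[f(t1, ..., tn)]] = f_F([[t1]], ..., [[tn]]), applied bottom-up.
    A term is a tuple (function_symbol, subterm, ...)."""
    f, args = term[0], term[1:]
    return YIELDS[f](*(interpret(t) for t in args))
```

For instance, the term alpha(beta1(beta1(eps())), beta2(eps())) yields the string aabcdd, an instance of the language a^m b^n c^n d^m with m = 2 and n = 1.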
For example, the strong generative capacity of the grammar of Figure 3 is

{⟨a^m b^n c^n d^m, α(β1^m(ϵ()), β2^n(ϵ()))⟩}

whereas any equivalent CFG must have a strong generative capacity of the form

{⟨a^m b^n c^n d^m, f^m(g^n(e()))⟩}

That is, in a CFG the n bs and cs must appear later in the derivation than the m as and ds, whereas in our example they appear in parallel.

3 Simulating structural descriptions

We now take a closer look at some examples of “squeezed” context-free formalisms to illustrate how a CFG can be used to simulate formalisms with greater strong generative power than CFG.

3.1 Motivation

Tree substitution grammar (TSG), tree insertion grammar (TIG), and regular-form TAG (RF-TAG) are all weakly context-free formalisms which can additionally be parsed in cubic time (with a caveat for RF-TAG below). For each of these formalisms a CKY-style parser can be written whose items are of the form [X, i, j] and are combined in various ways, but always according to the schema

[X, i, j]   [Y, j, k]
---------------------
      [Z, i, k]

just as in the CKY parser for CFG. In effect the parser dynamically converts the TSG, TIG, or RF-TAG into an equivalent CFG — each parser rule of the above form corresponds to the rule schema Z → X Y.

More importantly, given a grammar G and a string w, a parser can reconstruct all possible derivations of w under G by storing inside each chart item how that item was inferred. If we think of the parser as dynamically converting G into a CFG G′, then this CFG is likewise able to compositionally reconstruct TSG, TIG, or RF-TAG derivations — we say that G′ simulates G. Note that the parser specifies how to convert G into G′, but G′ is not itself a parser. Thus these three formalisms have a special relationship to CFG that is independent of any particular parsing algorithm: for any TSG, TIG, or RF-TAG G, there is a CFG that simulates G. We make this notion more precise below.
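A minimal CKY recognizer built on the [X, i, j] item schema above can be sketched as follows; the grammar encoding is ours, and this version only recognizes (it does not store the back-pointers needed to reconstruct derivations):

```python
from collections import defaultdict

def cky_recognize(words, lexical, binary, start="S"):
    """Recognize `words` with rules Z -> X Y (in `binary`) and
    Z -> word (in `lexical`), combining items by
    [X, i, j] + [Y, j, k] => [Z, i, k]."""
    n = len(words)
    chart = defaultdict(set)                 # (i, k) -> set of nonterminals
    for i, w in enumerate(words):
        chart[i, i + 1] |= set(lexical.get(w, ()))
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            k = i + width
            for j in range(i + 1, k):
                for X in chart[i, j]:
                    for Y in chart[j, k]:
                        chart[i, k] |= set(binary.get((X, Y), ()))
    return start in chart[0, n]
```

Extending each chart entry to record which (X, Y, j) triples produced it is exactly the derivation-reconstruction step the text describes.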
3.2 Excursus: regular form TAG

Strictly speaking, the recognition algorithm Rogers gives cannot be extended to parsing; that is, it generates all possible derived trees for a given string, but not all possible derivations. It is correct, however, as a parser for a further restricted subclass of TAGs:

Definition 4 We say that a TAG is in strict regular form if there exists some partial ordering ⪯ over the nonterminal alphabet such that for every auxiliary tree β, if the root and foot of β are labeled X, then for every node η along β’s spine where adjunction is allowed, X ⪯ label(η), and X = label(η) only if η is a foot node. (In this variant adjunction at foot nodes is permitted.)

Thus the only kinds of adjunction which can occur to unbounded depth are off-spine adjunction and adjunction at foot nodes. This stricter definition still has greater strong generative capacity than CFG. For example, the TAG in Figure 3 is in strict regular form, because the only nodes along spines where adjunction is allowed are foot nodes.

3.3 Simulability

So far we have not placed any restrictions on how these structural descriptions are computed. Even though we might imagine attaching arbitrary functions to the rules of a parser, an algorithm like CKY is only really capable of computing values of bounded size, or else structure-sharing in the chart will be lost, increasing the complexity of the algorithm possibly to exponential complexity. For a parser to compute arbitrary-sized objects, such as the derivations themselves, it must use back-pointers, references to the values of subcomputations but not the values themselves. The only functions on a back-pointer the parser can compute online are the identity function (by copying the back-pointer) and constant functions (by replacing the back-pointer); any other function would have to dereference the back-pointer and destroy the structure of the algorithm. Therefore such functions must be computed offline.
Definition 5 A simulating interpretation ⟦·⟧ is a bijection between two recognizable sets of terms such that

1. For each function symbol φ, there is a function φ̄ such that

⟦φ(t1, . . . , tn)⟧ = φ̄(⟦t1⟧, . . . , ⟦tn⟧)

2. Each φ̄ is definable as:

φ̄(⟨x11, . . . , x1m1⟩, . . . , ⟨xn1, . . . , xnmn⟩) = ⟨w1, . . . , wm⟩

where each wi can take one of the following forms:

(a) a variable xij, or
(b) a function application f(xi1j1, . . . , xinjn), n ≥ 0

3. Furthermore, we require that for any recognizable set T, ⟦T⟧ is also a recognizable set.

We say that ⟦·⟧ is trivial if every φ̄ is definable as

φ̄(x1, . . . , xn) = f(xπ(1), . . . , xπ(n))

where π is a permutation of {1, . . . , n}.2

The rationale for requirement (3) is that it should not be possible, simply by imposing local constraints on the simulating grammar, to produce a simulated grammar which does not even come from a CFRS.3

Definition 6 We say that a grammar G from a CFRS F is (trivially) simulable by a grammar G′ from another CFRS F′ if there is a (trivial) simulating interpretation ⟦·⟧s : T(G′) → T(G) which satisfies ⟦t⟧F′ = ⟦⟦t⟧s⟧F for all t ∈ T(G′).

As an example, a CFG which simulates the TAG of Figure 3 is shown in Figure 4. Note that if we give additional interpretations to the simulated yield functions α, β1, and β2, this CFG can compute any probabilities, translations, etc., that the original TAG can.

Note that if G′ trivially simulates G, they are very nearly strongly equivalent, except that the yield functions of G′ might take their arguments in a different order than G, and there might be several yield functions of G′ which correspond to a single yield function of G used in several different contexts. In fact, for technical reasons we will use this notion instead of strong equivalence for testing the strong generative power of a formal system.
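The trivial case of Definition 5 can be illustrated concretely. In this sketch (encoding ours) each simulating symbol φ maps to a simulated symbol f together with an argument permutation π:

```python
def make_trivial_interpretation(symbol_map):
    """symbol_map: phi -> (f, pi), with pi a tuple permuting argument
    positions. Returns the interpretation [[.]]_s over terms, where a
    term is a tuple (symbol, subterm, ...)."""
    def interpret(term):
        phi, args = term[0], term[1:]
        f, pi = symbol_map[phi]
        return (f,) + tuple(interpret(args[p]) for p in pi)
    return interpret
```

Because each φ̄ only renames the symbol and permutes arguments, the mapped terms carry exactly the structure of the simulated derivations, which is why trivial simulability is so close to strong equivalence.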
Thus the original problem, which was, given a formalism F, to find a formalism that has as much strong generative power as possible but remains weakly equivalent to F, is now recast as the following problem: find a formalism that trivially simulates as many grammars as possible but remains simulable by F.

2 Simulating interpretations and trivial simulating interpretations are similar to the generalized and “ungeneralized” syntax-directed translations, respectively, of Aho and Ullman (1969; 1971).

3 Without this requirement, there are certain pathological cases that cause the construction of Section 4.2 to produce infinite MM-TAGs.

[Figure 4: CFG which simulates the grammar of Figure 3. Here we leave the yield functions anonymous; y ← x denotes the function which maps x to y. Its productions, with associated functions:
S → α0•          α(x1, x2) ← ⟨x1, x2⟩
α0• → α0•        ⟨ϵ(), x2⟩ ← ⟨−, x2⟩
α0• → α1•        ⟨−, x2⟩ ← ⟨−, x2⟩
α1• → α1•        ⟨−, ϵ()⟩ ← ⟨−, −⟩
α1• → ϵ          ⟨−, −⟩ ← ⟨−, −⟩
α0• → β1^0[α0]   ⟨β1(x1), x2⟩ ← ⟨x1, x2⟩
β1^0[α0] → a β1^2[α0] d   ⟨x1, x2⟩ ← ⟨x1, x2⟩
β1^2[α0] → β1^0[α0]       ⟨β1(x1), x2⟩ ← ⟨x1, x2⟩
β1^2[α0] → α0•            ⟨ϵ(), x2⟩ ← ⟨−, x2⟩
α1• → β2^0[α1]            ⟨−, β2(x2)⟩ ← ⟨−, x2⟩
β2^0[α1] → b β2^2[α1] c   ⟨−, x2⟩ ← ⟨−, x2⟩
β2^2[α1] → β2^1[α1]       ⟨−, β2(x2)⟩ ← ⟨−, x2⟩
β2^2[α1] → α1•            ⟨−, ϵ()⟩ ← ⟨−, −⟩ ]

3.4 Results

The following is easy to show:

Proposition 1 Simulability is reflexive and transitive.

Because of transitivity, it is impossible that a formalism which is simulable by F could simulate a grammar that is not simulable by F. So we are looking for a formalism that can trivially simulate exactly those grammars that F can. In Section 4.1 we define a formalism called multicomponent multifoot TAG (MMTAG), and then in Section 4.2 we prove the following result:

Proposition 2 A grammar G from a CFRS is simulable by a CFG if and only if it is trivially simulable by an MMTAG in regular form.

The “if” direction (⇐) implies (because simulability is reflexive) that RF-MMTAG is simulable by a CFG, and therefore cubic-time parsable.
(The proof below does give an effective procedure for constructing a simulating CFG for any RF-MMTAG.) The "only if" direction (⇒) shows that, in the sense we have defined, RF-MMTAG is the most powerful such formalism. We can generalize this result using the notion of a meta-level grammar (Dras, 1999).
Definition 7 If F1 and F2 are two CFRSs, F2 ◦ F1 is the CFRS characterized by the interpretation function ⟦·⟧F2◦F1 = ⟦·⟧F2 ◦ ⟦·⟧F1.
F1 is the meta-level formalism, which generates derivations for F2. Obviously F1 must be a tree-rewriting system.
Proposition 3 For any CFRS F′, a grammar G from a (possibly different) CFRS is simulable by a grammar in F′ if and only if it is trivially simulable by a grammar in F′ ◦ RF-MMTAG.
The "only if" direction (⇒) follows from the fact that the MMTAG constructed in the proof of Proposition 2 generates the same derived trees as the CFG. The "if" direction (⇐) is a little trickier because the constructed CFG inserts and relabels nodes.
4 Multicomponent multifoot TAG
4.1 Definitions
MMTAG resembles a cross between set-local multicomponent TAG (Joshi, 1987) and ranked node rewriting grammar (Abe, 1988), a variant of TAG in which auxiliary trees may have multiple foot nodes. It also has much in common with d-tree substitution grammar (Rambow et al., 1995).
Definition 8 An elementary tree set ⃗α is a finite set of trees (called the components of ⃗α) with the following properties:
1. Zero or more frontier nodes are designated foot nodes, which lack labels (following Abe), but are marked with the diacritic ∗;
2. Zero or more (non-foot) nodes are designated adjunction nodes, which are partitioned into one or more disjoint sets called adjunction sites. We notate this by assigning an index i to each adjunction site and marking each node of site i with the diacritic i.
3. Each component is associated with a symbol called its type. This is analogous to the left-hand side of a CFG rule (again, following Abe).
4.
The components of ⃗α are connected by d-edges from foot nodes to root nodes (notated by dotted lines) to form a single tree structure. A single foot node may have multiple d-children, and their order is significant. (See Figure 5 for an example.)
A multicomponent multifoot tree adjoining grammar is a tuple ⟨Σ, P, S⟩, where:
1. Σ is a finite alphabet;
2. P is a finite set of tree sets; and
3. S ∈ Σ is a distinguished start symbol.
Figure 5: Example of MMTAG adjunction. The types of the components, not shown in the figure, are all X.
Definition 9 A component α is adjoinable at a node η if η is an adjunction node and the type of α equals the label of η. The result of adjoining a component α at a node η is the tree set formed by separating η from its children, replacing η with the root of α, and replacing the ith foot node of α with the ith child of η. (Thus adjunction of a one-foot component is analogous to TAG adjunction, and adjunction of a zero-foot component is analogous to substitution.) A tree set ⃗α is adjoinable at an adjunction site ⃗η if there is a way to adjoin each component of ⃗α at a different node of ⃗η (with no nodes left over) such that the dominance and precedence relations within ⃗α are preserved. (See Figure 5 for an example.)
We now define a regular form for MMTAG that is analogous to strict regular form for TAG. A spine is the path from the root to a foot of a single component. Whenever adjunction takes place, several spines are inserted inside or concatenated with other spines. To ensure that unbounded insertion does not take place, we impose an ordering on spines, by means of functions ρi that map the type of a component to the rank of that component's ith spine.
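Definition 9's single-component adjunction can be sketched operationally: detach the target node from its children, put the component's root in its place, and substitute the children for the feet. The tree encoding and labels below are our own (not the paper's), and the sketch assumes the number of feet equals the number of children of the target node.

```python
# Sketch of single-component adjunction (after Definition 9).

class Node:
    def __init__(self, label, children=None, foot=False):
        self.label = label
        self.children = children or []
        self.foot = foot

def feet(t):
    """Left-to-right list of foot nodes in t."""
    if t.foot:
        return [t]
    return [f for c in t.children for f in feet(c)]

def adjoin(tree, target_label, component):
    """Replace the first node labelled target_label by the component,
    sending the target's i-th child to the component's i-th foot."""
    if tree.label == target_label and not tree.foot:
        for f, child in zip(feet(component), tree.children):
            f.label, f.children, f.foot = child.label, child.children, False
        return component
    tree.children = [adjoin(c, target_label, component) for c in tree.children]
    return tree

def frontier(t):
    if not t.children:
        return t.label
    return "".join(frontier(c) for c in t.children)

# Adjoin the component X(a * d) at the X node of S(X(Y(b c))).
comp = Node("X", [Node("a"), Node(None, foot=True), Node("d")])
tree = Node("S", [Node("X", [Node("Y", [Node("b"), Node("c")])])])
tree = adjoin(tree, "X", comp)
print(frontier(tree))  # abcd
```

A one-foot component behaves like TAG adjunction, and a zero-foot component (no feet, no children moved) behaves like substitution, as the definition notes.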
Definition 10 We say that an adjunction node η ∈ ⃗η is safe in a spine if it is the lowest node (except the foot) in that spine, and if each component under that spine consists only of a member of ⃗η and zero or more foot nodes. We say that an MMTAG G is in regular form if there are functions ρi from Σ into the domain of some partial ordering ⪯ such that for each component α of type X, for each adjunction node η ∈ α, if the jth child of η dominates the ith foot node of α (that is, another component's jth spine would adjoin into the ith spine), then ρi(X) ⪯ ρj(label(η)), and ρi(X) = ρj(label(η)) only if η is safe in the ith spine.
Thus the only kinds of adjunction which can occur to unbounded depth are off-spine adjunction and safe adjunction. The adjunction shown in Figure 5 is an example of safe adjunction.
4.2 Proof of Proposition 2
(⇐) First we describe how to construct a simulating CFG for any RF-MMTAG; then this direction of the proof follows from the transitivity of simulability. When a CFG simulates a regular form TAG, each nonterminal must encapsulate a stack (of bounded depth) to keep track of adjunctions. In the multicomponent case, these stacks must be generalized to trees (again, of bounded size). So the nonterminals of G′ are of the form [η, t], where t is a derivation fragment of G with a dot (·) at exactly one node ⃗α, and η is a node of ⃗α. Let ¯η be the node in the derived tree where η ends up. A fragment t can be put into a normal form as follows:
1. For every ⃗α above the dot, if ¯η does not lie along a spine of ⃗α, delete everything above ⃗α.
2. For every ⃗α not above or at the dot, if ¯η does not lie along a d-edge of ⃗α, delete ⃗α and everything below and replace it with ⊤ if ¯η dominates ⃗α; otherwise replace it with ⊥.
3. If there are two nodes ⃗α1 and ⃗α2 along a path which name the same tree set and ¯η lies along the same spine or same d-edge in both of them, collapse ⃗α1 and ⃗α2, deleting everything in between.
Basically this process removes all unboundedly long paths, so that the set of normal forms is finite. In the rule schemata below, the terms in the left-hand sides range over normalized terms, and their corresponding right-hand sides are renormalized. Let up(t) denote the tree that results from moving the dot in t up one step. The value of a subderivation t′ of G′ under ⟦·⟧s is a tuple of partial derivations of G, one for each ⊤ symbol in the root label of t′, in order. Where we do not define a yield function for a production below, the identity function is understood.
For every set ⃗α with a single S-type component rooted by η, add the rule
S → [η, ·⃗α(⊤, . . . , ⊤)]    ⃗α(x1, . . . , xn) ← ⟨x1, . . . , xn⟩
For every non-adjunction, non-foot node η with children η1, . . . , ηn (n ≥ 0),
[η, t] → [η1, t] · · · [ηn, t]
For every component with root η′ that is adjoinable at η,
[η, up(t)] → [η′, t]
If η′ is the root of the whole set ⃗α′, this rule rewrites a ⊤ to several ⊤ symbols; the corresponding yield function is then
⟨. . . , ⃗α′(x1, . . . , xn), . . .⟩ ← ⟨. . . , x1, . . . , xn, . . .⟩
For every component with ith foot η′i that is adjoinable at a node with ith child ηi,
[η′i, t] → [ηi, up(t)]
This last rule skips over deleted parts of the derivation tree, but this is harmless in a regular form MMTAG, because all the skipped adjunctions are safe.
(⇒) First we describe how to decompose any given derivation t′ of G′ into a set of elementary tree sets. Let t = ⟦t′⟧s. (Note the convention that primed variables always pertain to the simulating grammar, unprimed variables to the simulated grammar.) If, during the computation of t, a node η′ creates the node η, we say that η′ is productive and produces η. Without loss of generality, let us assume that there is a one-to-one correspondence between productive nodes and nodes of t.4 To start, let η be the root of t, and η1, . . . , ηn its children.
Define the domain of ηi as follows: any node in t′ that produces ηi or any of its descendants is in the domain of ηi, and any non-productive node whose parent is in the domain of ηi is also in the domain of ηi. For each ηi, excise each connected component of the domain of ηi. This operation is the reverse of adjunction (see Figure 6): each component gets foot nodes to replace its lost children, and the components are connected by d-edges according to their original configuration. Meanwhile an adjunction node is created in place of each component. This node is given a label (which also becomes the type of the excised component) whose job is to make sure the final grammar does not overgenerate; we describe how the label is chosen below. The adjunction nodes are partitioned such that the ith site contains all the adjunction nodes created when removing ηi. The tree set that is left behind is the elementary tree set corresponding to η (rather, the function symbol that labels η); this process is repeated recursively on the children of η, if any. Thus any derivation of G′ can be decomposed into elementary tree sets.
4 If G′ does not have this property, it can be modified so that it does. This may change the derived trees slightly, which makes the proof of Proposition 3 trickier.
Figure 6: Example derivation (left) of the grammar of Figure 4, and first step of decomposition. Non-adjunction nodes are shown with the placeholder • (because the yield functions in the original grammar were anonymous), the Greek letters indicating what is produced by each node. Adjunction nodes are shown with labels Qi in place of the (very long) true labels.
Figure 7: MMTAG converted from CFG of Figure 4 (cf. the original TAG in Figure 3). Each component's type is written to its left.
Let ˆG be the union of the decompositions of all possible derivations of G′ (see Figure 7 for an example).
Labeling adjunction nodes For any node η′, and any list of nodes ⟨η′1, . . . , η′n⟩, let the signature of η′ with respect to ⟨η′1, . . . , η′n⟩ be ⟨A, a1, . . . , am⟩, where A is the left-hand side of the GCFG production that generated η′, and ai = ⟨j, k⟩ if η′ gets its ith field from the kth field of η′j, or ∗ if η′ produces a function symbol in its ith field. So when we excise the domain of ηi, the label of the node left behind by a component α is ⟨s, s1, . . . , sn⟩, where s is the signature of the root of α with respect to the foot nodes and s1, . . . , sn are the signatures of the foot nodes with respect to their d-children. Note that the number of possible adjunction labels is finite, though large.
ˆG trivially simulates G. Since each tree of ˆG corresponds to a function symbol (though not necessarily one-to-one), it is easy to write a trivial simulating interpretation ⟦·⟧ : T(ˆG) → T(G). To see that ˆG does not overgenerate, observe that the nonterminal labels inside the signatures ensure that every derivation of ˆG corresponds to a valid derivation of G′, and therefore G. To see that ⟦·⟧ is one-to-one, observe that the adjunction labels keep track of how G′ constructed its simulated derivations, ensuring that for any derivation ˆt of ˆG, the decomposition of the derived tree of ˆt is ˆt itself. Therefore two derivations of ˆG cannot correspond to the same derivation of G′, nor of G.
ˆG is finite. Briefly, suppose that the number of components per tree set is unbounded. Then it is possible, by intersecting G′ with a recognizable set, to obtain a grammar whose simulated derivation set is non-recognizable. The idea is that multicomponent tree sets give rise to dependent paths in the derivation set, so if there is no bound on the number of components in a tree set, neither is there a bound on the length of dependent paths.
This contradicts the requirement that a simulating interpretation map recognizable sets to recognizable sets. Suppose that the number of nodes per component is unbounded. If the number of components per tree set is bounded, so must be the number of adjunction nodes per component; then it is possible, again by intersecting G′ with a recognizable set, to obtain a grammar which is infinitely ambiguous with respect to simulated derivations, which contradicts the requirement that simulating interpretations be bijective.
ˆG is in regular form. A component of ˆG corresponds to a derivation fragment of G′ which takes fields from several subderivations and processes them, combining some into a larger structure and copying some straight through to the root. Let ρi(X) be the number of fields that a component of type X copies from its ith foot up to its root. This information is encoded in X, in the signature of the root. Then ˆG satisfies the regular form constraint, because when adjunction inserts one spine into another spine, the inserted spine must copy at least as many fields as the outer one. Furthermore, if the adjunction site is not safe, then the inserted spine must additionally copy the value produced by some lower node.
5 Discussion
We have proposed a more constrained version of Joshi's question, "How much strong generative power can be squeezed out of a formal system without increasing its weak generative power," and shown that within these constraints, a variant of TAG called MMTAG characterizes the limit of how much strong generative power can be squeezed out of CFG. Moreover, using the notion of a meta-level grammar, this result is extended to formalisms beyond CFG. It remains to be seen whether RF-MMTAG, whether used directly or for specifying meta-level grammars, provides further practical benefits on top of existing "squeezed" grammar formalisms like tree-local MCTAG, tree insertion grammar, or regular form TAG.
This way of approaching Joshi's question is by no means the only way, but we hope that this work will contribute to a better understanding of the strong generative capacity of constrained grammar formalisms as well as reveal more powerful formalisms for linguistic analysis and natural language processing.
Acknowledgments
This research is supported in part by NSF grant SBR-89-20230-15. Thanks to Mark Dras, William Schuler, Anoop Sarkar, Aravind Joshi, and the anonymous reviewers for their valuable help. S. D. G.
References
Naoki Abe. 1988. Feasible learnability of formal grammars and the theory of natural language acquisition. In Proceedings of the Twelfth International Conference on Computational Linguistics (COLING-88), pages 1–6, Budapest.
A. V. Aho and J. D. Ullman. 1969. Syntax directed translations and the pushdown assembler. J. Comp. Sys. Sci., 3:37–56.
A. V. Aho and J. D. Ullman. 1971. Translations on a context free grammar. Information and Control, 19:439–475.
Tilman Becker, Owen Rambow, and Michael Niv. 1992. The derivational generative power of formal systems, or, Scrambling is beyond LCFRS. Technical Report IRCS-92-38, Institute for Research in Cognitive Science, University of Pennsylvania.
Noam Chomsky. 1965. Aspects of the Theory of Syntax. MIT Press, Cambridge, MA.
Mark Dras. 1999. A meta-level grammar: redefining synchronous TAG for translation and paraphrase. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 80–87, College Park, MD.
Aravind K. Joshi. 1987. An introduction to tree adjoining grammars. In Alexis Manaster-Ramer, editor, Mathematics of Language. John Benjamins, Amsterdam.
Aravind K. Joshi. 2000. Relationship between strong and weak generative power of formal systems. In Proceedings of the Fifth International Workshop on TAG and Related Formalisms (TAG+5), pages 107–113.
Philip H. Miller. 1999. Strong Generative Capacity: The Semantics of Linguistic Formalism.
Number 103 in CSLI lecture notes. CSLI Publications, Stanford.
Owen Rambow, K. Vijay-Shanker, and David Weir. 1995. D-tree grammars. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 151–158, Cambridge, MA.
James Rogers. 1994. Capturing CFLs with tree adjoining grammars. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pages 155–162, Las Cruces, NM.
Yves Schabes and Richard C. Waters. 1993. Lexicalized context-free grammars. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, pages 121–129, Columbus, OH.
K. Vijay-Shanker, David Weir, and Aravind Joshi. 1987. Characterizing structural descriptions produced by various grammatical formalisms. In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics, pages 104–111, Stanford, CA.
David J. Weir. 1988. Characterizing Mildly Context-Sensitive Grammar Formalisms. Ph.D. thesis, Univ. of Pennsylvania.
An Algebra for Semantic Construction in Constraint-based Grammars
Ann Copestake, Computer Laboratory, University of Cambridge, New Museums Site, Pembroke St, Cambridge, UK, [email protected]
Alex Lascarides, Division of Informatics, University of Edinburgh, 2 Buccleuch Place, Edinburgh, Scotland, UK, [email protected]
Dan Flickinger, CSLI, Stanford University and YY Software, Ventura Hall, 220 Panama St, Stanford, CA 94305, USA, [email protected]
Abstract
We develop a framework for formalizing semantic construction within grammars expressed in typed feature structure logics, including HPSG. The approach provides an alternative to the lambda calculus; it maintains much of the desirable flexibility of unification-based approaches to composition, while constraining the allowable operations in order to capture basic generalizations and improve maintainability.
1 Introduction
Some constraint-based grammar formalisms incorporate both syntactic and semantic representations within the same structure. For instance, Figure 1 shows representations of typed feature structures (TFSs) for Kim, sleeps and the phrase Kim sleeps, in an HPSG-like representation, loosely based on Sag and Wasow (1999). The semantic representation expressed is intended to be equivalent to r name(x, Kim) ∧ sleep(e, x).1 Note:
1. Variable equivalence is represented by coindexation within a TFS.
2. The coindexation in Kim sleeps is achieved as an effect of instantiating the SUBJ slot in the sign for sleeps.
3. Structures representing individual predicate applications (henceforth, elementary predications, or EPs) are accumulated by an append operation. Conjunction of EPs is implicit.
4. All signs have an index functioning somewhat like a λ-variable.
1 The variables are free; we will discuss scopal relationships and quantifiers below.
A similar approach has been used in a large number of implemented grammars (see Shieber (1986) for a fairly early example).
It is in many ways easier to work with than λ-calculus based approaches (which we discuss further below) and has the great advantage of allowing generalizations about the syntax-semantics interface to be easily expressed. But there are problems. The operations are only specified in terms of the TFS logic: the interpretation relies on an intuitive correspondence with a conventional logical representation, but this is not spelled out. Furthermore the operations on the semantics are not tightly specified or constrained. For instance, although HPSG has the Semantics Principle (Pollard and Sag, 1994) this does not stop the composition process accessing arbitrary pieces of structure, so it is often not easy to conceptually disentangle the syntax and semantics in an HPSG. Nothing guarantees that the grammar is monotonic, by which we mean that in each rule application the semantic content of each daughter subsumes some portion of the semantic content of the mother (i.e., no semantic information is dropped during composition): this makes it impossible to guarantee that certain generation algorithms will work effectively. Finally, from a theoretical perspective, it seems clear that substantive generalizations are being missed. Minimal Recursion Semantics (MRS: Copestake et al (1999), see also Egg (1998)) tightens up the specification of composition a little. 
It enforces monotonic accumulation of EPs by making all rules append the EPs of their daughters (an approach which was followed by Sag and Wasow (1999)) but it does not fully specify compositional principles and does not formalize composition.
Figure 1: Expressing semantics in TFSs
We attempt to rectify these problems by developing an algebra which gives a general way of expressing composition. The semantic algebra lets us specify the allowable operations in a less cumbersome notation than TFSs and abstracts away from the specific feature architecture used in individual grammars, but the essential features of the algebra can be encoded in the hierarchy of lexical and constructional type constraints. Our work actually started as an attempt at rational reconstruction of semantic composition in the large grammar implemented by the LinGO project at CSLI (available via http://lingo.stanford.edu). Semantics and the syntax/semantics interface have accounted for approximately nine-tenths of the development time of the English Resource Grammar (ERG), largely because the account of semantics within HPSG is so underdetermined. In this paper, we begin by giving a formal account of a very simplified form of the algebra and in §3, we consider its interpretation. In §4 to §6, we generalize to the full algebra needed to capture the use of MRS in the LinGO English Resource Grammar (ERG).
Finally we conclude with some comparisons to the λ-calculus and to other work on unification-based grammar.
2 A simple semantic algebra
The following shows the equivalents of the structures in Figure 1 in our algebra:
Kim: [x2]{[]subj, []comp}[r name(x2, Kim)]{}
sleeps: [e1]{[x1]subj, []comp}[sleep(e1, x1)]{}
Kim sleeps: [e1]{[]subj, []comp}[sleep(e1, x1), r name(x2, Kim)]{x1 = x2}
The last structure is semantically equivalent to: [sleep(e1, x1), r name(x1, Kim)]. In the structure for sleeps, the first part, [e1], is a hook and the second part ([x1]subj and []comp) is the holes. The third element (the lzt) is a bag of elementary predications (EPs).2 Intuitively, the hook is a record of the value in the semantic entity that can be used to fill a hole in another entity during composition. The holes record gaps in the semantic form which occur because it represents a syntactically unsaturated structure. Some structures have no holes, such as that for Kim. When structures are composed, a hole in one structure (the semantic head) is filled with the hook of the other (by equating the variables) and their lzts are appended. It should be intuitively obvious that there is a straightforward relationship between this algebra and the TFSs shown in Figure 1, although there are other TFS architectures which would share the same encoding. We now give a formal description of the algebra. In this section, we simplify by assuming that each entity has only one hole, which is unlabelled, and only consider two sorts of variables: events and individuals. The set of semantic entities is built from the following vocabulary:
1. The absurdity symbol ⊥.
2. Indices i1, i2, . . ., consisting of two subtypes of indices: events e1, e2, . . . and individuals x1, x2, . . ..
3. n-place predicates, which take indices as arguments.
4. =.
2 As usual in MRS, this is a bag rather than a set because we do not want to have to check for/disallow repeated EPs; e.g., big big car.
Equality can only be used to identify variables of compatible sorts: e.g., x1 = x2 is well formed, but e = x is not. Sort compatibility corresponds to unifiability in the TFS logic.
Definition 1 Simple Elementary Predications (SEP) An SEP contains two components:
1. A relation symbol
2. A list of zero or more ordinary variable arguments of the relation (i.e., indices)
This is written relation(arg1, . . . , argn). For instance, like(e, x, y) is a well-formed SEP.
Equality Conditions: Where i1 and i2 are indices, i1 = i2 is an equality condition.
Definition 2 The Set Σ of Simple Semantic Entities (SSEMENT) s ∈ Σ if and only if s = ⊥ or s = ⟨s1, s2, s3, s4⟩ such that:
• s1 = {[i]} is a hook;
• s2 = ∅ or {[i′]} is a hole;
• s3 is a bag of SEPs (the lzt);
• s4 is a set of equalities between variables (the eqs).
We write a SSEMENT as: [i1][i2][SEPs]{EQs}. Note for convenience we omit the set markers {} from the hook and hole when there is no possible confusion. The SEPs and EQs are (partial) descriptions of the fully specified formulae of first order logic.
Definition 3 The Semantic Algebra A Semantic Algebra defined on vocabulary V is the algebra ⟨Σ, op⟩ where:
• Σ is the set of SSEMENTs defined on the vocabulary V, as given above;
• op : Σ × Σ −→ Σ is the operation of semantic composition. It satisfies the following conditions. If a1 = ⊥ or a2 = ⊥ or hole(a2) = ∅, then op(a1, a2) = ⊥. Otherwise:
1. hook(op(a1, a2)) = hook(a2)
2. hole(op(a1, a2)) = hole(a1)
3. lzt(op(a1, a2)) = lzt(a1) ⊕ lzt(a2)
4. eq(op(a1, a2)) = Tr(eq(a1) ∪ eq(a2) ∪ {hook(a1) = hole(a2)})
where Tr stands for transitive closure (i.e., if S = {x = y, y = z}, then Tr(S) = {x = y, y = z, x = z}).
This definition makes a2 the equivalent of a semantic functor and a1 its argument.
Theorem 1 op is a function If a1 = a3 and a2 = a4, then op(a1, a2) = op(a3, a4). Thus op is a function. Furthermore, the range of op is within Σ. So ⟨Σ, op⟩ is an algebra.
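As a concrete illustration of Definition 3, here is a minimal Python sketch of SSEMENTs and op, replaying the Kim sleeps composition from the start of this section. The dictionary encoding and the spelling r_name are our own rendering, not the paper's, and None plays the role both of ⊥ and of an empty hole.

```python
# Minimal sketch of the simple semantic algebra (Definition 3).

def closure(eqs):
    """Symmetric-transitive closure Tr of a set of equalities."""
    eqs = set(eqs) | {(y, x) for x, y in eqs}
    changed = True
    while changed:
        new = {(x, w) for x, y in eqs for z, w in eqs if y == z}
        changed = not new <= eqs
        eqs |= new
    return eqs

def op(a1, a2):
    """Compose argument a1 with semantic head a2."""
    if a1 is None or a2 is None or a2["hole"] is None:
        return None                               # the absurdity symbol
    return {
        "hook": a2["hook"],                       # clause 1
        "hole": a1["hole"],                       # clause 2
        "lzt": a1["lzt"] + a2["lzt"],             # clause 3: append
        "eq": closure(set(a1["eq"]) | set(a2["eq"])
                      | {(a1["hook"], a2["hole"])}),  # clause 4
    }

kim = {"hook": "x2", "hole": None,
       "lzt": [("r_name", "x2", "Kim")], "eq": set()}
sleeps = {"hook": "e1", "hole": "x1",
          "lzt": [("sleep", "e1", "x1")], "eq": set()}
s = op(kim, sleeps)
print(s["hook"], s["lzt"], ("x2", "x1") in s["eq"])
```

The result has hook e1, no hole, the appended lzt, and the equality x2 = x1, matching the Kim sleeps structure given above; composing two hole-less structures (e.g., op(kim, kim)) yields ⊥, as the definition requires.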
We can assume that semantic composition always involves two arguments, since we can define composition in ternary rules etc. as a sequence of binary operations. Grammar rules (i.e., constructions) may contribute semantic information, but we assume that this information obeys all the same constraints as the semantics for a sign, so in effect such a rule is semantically equivalent to having null elements in the grammar. The correspondence between the order of the arguments to op and linear order is specified by syntax. We use variables and equality statements to achieve the same effect as coindexation in TFSs. This raises one problem, which is the need to avoid accidental variable equivalences (e.g., accidentally using x in both the signs for cat and dog when building the logical form of A dog chased a cat). We avoid this by adopting a convention that each instance of a lexical sign comes from a set of basic sements that have pairwise distinct variables. The equivalent of coindexation within a lexical sign is represented by repeating the same variable but the equivalent of coindexation that occurs during semantic composition is an equality condition which identifies two different variables. Stating this formally is straightforward but a little long-winded, so we omit it here.
3 Interpretation
The SEPs and EQs can be interpreted with respect to a first order model ⟨E, A, F⟩ where:
1. E is a set of events
2. A is a set of individuals
3. F is an interpretation function, which assigns tuples of appropriate kinds to the predicates of the language.
The truth definition of the SEPs and EQs (which we group together under the term SMRS, for simple MRS) is as follows:
1. For all events and individuals v, [[v]]⟨M,g⟩ = g(v).
2. For all n-predicates P n, [[P n]]⟨M,g⟩ = {⟨t1, . . . , tn⟩ : ⟨t1, . . . , tn⟩ ∈ F(P n)}.
3. [[P n(v1, . . . , vn)]]⟨M,g⟩ = 1 iff ⟨[[v1]]⟨M,g⟩, . . . , [[vn]]⟨M,g⟩⟩ ∈ [[P n]]⟨M,g⟩.
4. [[φ ∧ ψ]]⟨M,g⟩ = 1 iff [[φ]]⟨M,g⟩ = 1 and [[ψ]]⟨M,g⟩ = 1.
Thus, with respect to a model M, an SMRS can be viewed as denoting an element of P(G), where G is the set of variable assignment functions (i.e., elements of G assign the variables e, . . . and x, . . . their denotations):
[[smrs]]M = {g : g is a variable assignment function and M |=g smrs}
We now consider the semantics of the algebra. This must define the semantics of the operation op in terms of a function f which is defined entirely in terms of the denotations of op's arguments. In other words, [[op(a1, a2)]] = f([[a1]], [[a2]]) for some function f. Intuitively, where the SMRS of the SEMENT a1 denotes G1 and the SMRS of the SEMENT a2 denotes G2, we want the semantic value of the SMRS of op(a1, a2) to denote the following:
G1 ∩ G2 ∩ [[hook(a1) = hole(a2)]]
But this cannot be constructed purely as a function of G1 and G2. The solution is to add hooks and holes to the denotations of SEMENTS (cf. Zeevat, 1989). We define the denotation of a SEMENT to be an element of I × I × P(G), where I = E ∪ A, as follows:
Definition 4 Denotations of SEMENTs If a ̸= ⊥ is a SEMENT, [[a]]M = ⟨[i], [i′], G⟩ where:
1. [i] = hook(a)
2. [i′] = hole(a)
3. G = {g : M |=g smrs(a)}
[[⊥]]M = ⟨∅, ∅, ∅⟩
So, the meanings of SEMENTs are ordered three-tuples, consisting of the hook and hole elements (from I) and a set of variable assignment functions that satisfy the SMRS.
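The set G of satisfying assignment functions in Definition 4 can be computed by brute force over a small finite model. The model below (the entities, the event name e_sleep, and the extension F) is invented purely for illustration.

```python
from itertools import product

# Brute-force sketch of [[smrs]] = {g : M |=g smrs} over a tiny model.
individuals = ["kim", "sandy"]
events = ["e_sleep"]
F = {"sleep": {("e_sleep", "kim")}, "r_name": {("kim", "Kim")}}

def satisfies(g, eps, eqs):
    for rel, *args in eps:
        # variables are looked up in g; constants (like Kim) pass through
        if tuple(g.get(a, a) for a in args) not in F[rel]:
            return False
    return all(g[x] == g[y] for x, y in eqs)

eps = [("sleep", "e1", "x1"), ("r_name", "x2", "Kim")]
eqs = [("x1", "x2")]
sat = [g for g in ({"e1": e, "x1": i1, "x2": i2}
                   for e, i1, i2 in product(events, individuals, individuals))
       if satisfies(g, eps, eqs)]
print(sat)  # [{'e1': 'e_sleep', 'x1': 'kim', 'x2': 'kim'}]
```

Only the assignment sending both x1 and x2 to kim survives, which is the intended reading of the Kim sleeps SMRS.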
We can now define the following operation f over these denotations to create an algebra:
Definition 5 Semantics of the Semantic Construction Algebra ⟨I × I × P(G), f⟩ is an algebra, where:
f(⟨∅, ∅, ∅⟩, ⟨[i2], [i′2], G2⟩) = ⟨∅, ∅, ∅⟩
f(⟨[i1], [i′1], G1⟩, ⟨∅, ∅, ∅⟩) = ⟨∅, ∅, ∅⟩
f(⟨[i1], [i′1], G1⟩, ⟨[i2], ∅, G2⟩) = ⟨∅, ∅, ∅⟩
f(⟨[i1], [i′1], G1⟩, ⟨[i2], [i′2], G2⟩) = ⟨[i2], [i′1], G1 ∩ G2 ∩ G′⟩
where G′ = {g : g(i1) = g(i′2)}
And this operation demonstrates that semantic construction is compositional:
Theorem 2 Semantics of Semantic Construction is Compositional The mapping [[]] : ⟨Σ, op⟩ −→ ⟨⟨I, I, G⟩, f⟩ is a homomorphism (so [[op(a1, a2)]] = f([[a1]], [[a2]])). This follows from the definitions of [[]], op and f.
4 Labelling holes
We now start considering the elaborations necessary for real grammars. As we suggested earlier, it is necessary to have multiple labelled holes. There will be a fixed inventory of labels for any grammar framework, although there may be some differences between variants.3 In HPSG, complements are represented using a list, but in general there will be a fixed upper limit for the number of complements so we can label holes COMP1, COMP2, etc. The full inventory of labels for the ERG is: SUBJ, SPR, SPEC, COMP1, COMP2, COMP3 and MOD (see Pollard and Sag, 1994).
3 For instance, Sag and Wasow (1999) omit the distinction between SPR and SUBJ that is often made in other HPSGs.
To illustrate the way the formalization goes with multiple slots, consider opsubj:
Definition 6 The definition of opsubj opsubj(a1, a2) is the following:
If a1 = ⊥ or a2 = ⊥ or holesubj(a2) = ∅, then opsubj(a1, a2) = ⊥. And if ∃l ̸= subj such that |holel(a1) ∪ holel(a2)| > 1, then opsubj(a1, a2) = ⊥. Otherwise:
1. hook(opsubj(a1, a2)) = hook(a2)
2. For all labels l ̸= subj: holel(opsubj(a1, a2)) = holel(a1) ∪ holel(a2)
3. lzt(opsubj(a1, a2)) = lzt(a1) ⊕ lzt(a2)
4. eq(opsubj(a1, a2)) = Tr(eq(a1) ∪ eq(a2) ∪ {hook(a1) = holesubj(a2)})
where Tr stands for transitive closure.
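A hedged sketch of Definition 6, generalized over hole labels: the dictionary representation of the labelled holes is our own, and the transitive closure step of clause 4 is left out for brevity.

```python
# Sketch of op_l (after Definition 6), parameterized by the hole label.

def op_l(label, a1, a2):
    """Fill a2's hole `label` with a1's hook."""
    if a1 is None or a2 is None or a2["holes"].get(label) is None:
        return None                      # the absurdity symbol
    holes = {}
    for l in set(a1["holes"]) | set(a2["holes"]):
        if l == label:
            continue                     # this hole has just been filled
        vals = {a1["holes"].get(l), a2["holes"].get(l)} - {None}
        if len(vals) > 1:
            return None                  # |hole_l(a1) U hole_l(a2)| > 1
        if vals:
            holes[l] = vals.pop()
    return {"hook": a2["hook"], "holes": holes,
            "lzt": a1["lzt"] + a2["lzt"],
            "eq": set(a1["eq"]) | set(a2["eq"])
                  | {(a1["hook"], a2["holes"][label])}}

kim = {"hook": "x2", "holes": {},
       "lzt": [("r_name", "x2", "Kim")], "eq": set()}
sleeps = {"hook": "e1", "holes": {"subj": "x1"},
          "lzt": [("sleep", "e1", "x1")], "eq": set()}
s = op_l("subj", kim, sleeps)
print(s["hook"], s["holes"], s["eq"])  # e1 {} {('x2', 'x1')}
```

Filling the subj hole leaves the remaining holes as the union of the daughters' holes, and attempting to fill a hole a2 lacks (e.g., op_l("comp1", kim, sleeps)) returns ⊥.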
There will be similar operations opcomp1, opcomp2, etc. for each labelled hole. These operations can be proved to form an algebra ⟨Σ, opsubj, opcomp1, . . .⟩ in a similar way to the unlabelled case shown in Theorem 1. A little more work is needed to prove that opl is closed on Σ. In particular, with respect to clause 2 of the above definition, it is necessary to prove that opl(a1, a2) = ⊥ or for all labels l′, |holel′(opl(a1, a2))| ≤ 1, but it is straightforward to see this is the case. These operations can be extended in a straightforward way to handle simple constituent coordination of the kind that is currently dealt with in the ERG (e.g., Kim sleeps and talks and Kim and Sandy sleep); such cases involve daughters with non-empty holes of the same label, and the semantic operation equates these holes in the mother SEMENT.
5 Scopal relationships
The algebra with labelled holes is sufficient to deal with simple grammars, such as that in Sag and Wasow (1999), but to deal with scope, more is needed. It is now usual in constraint-based grammars to allow for underspecification of quantifier scope by giving labels to pieces of semantic information and stating constraints between the labels. In MRS, labels called handles are associated with each EP. Scopal relationships are represented by EPs with handle-taking arguments. If all handle arguments are filled by handles labelling EPs, the structure is fully scoped, but in general the relationship is not directly specified in a logical form but is constrained by the grammar via additional conditions (handle constraints or hcons).4 A variety of different types of condition are possible, and the algebra developed here is neutral between them, so we will simply use relh to stand for such a constraint, intending it to be neutral between, for instance, =q (qeq: equality modulo quantifiers) relationships used in MRS and the more usual ≤ relationships from UDRT (Reyle, 1993). The conditions in hcons are accumulated by append.
To accommodate scoping in the algebra, we will make hooks and holes pairs of indices and handles. The handle in the hook corresponds to the LTOP feature in MRS. The new vocabulary is:
1. the absurdity symbol ⊥;
2. handles h1, h2, . . .;
3. indices i1, i2, . . ., as before;
4. n-predicates which take handles and indices as arguments;
5. relh and =.

The revised definition of an EP is as in MRS:

Definition 7 Elementary Predications (EPs)
An EP contains exactly four components:
1. a handle, which is the label of the EP
2. a relation
3. a list of zero or more ordinary variable arguments of the relation (i.e., indices)
4. a list of zero or more handles corresponding to scopal arguments of the relation.

This is written h:r(a1, . . . , an, sa1, . . . , sam). For instance, h:every(x, h1, h2) is an EP. We revise the definition of semantic entities to add the hcons conditions and to make hooks and holes pairs of handles and indices.

H-Cons Conditions: Where h1 and h2 are handles, h1 relh h2 is an H-Cons condition.

[Footnote 4: The underspecified scoped forms which correspond to sentences can be related to first order models of the fully scoped forms (i.e., to models of WFFs without labels) via supervaluation (e.g., Reyle, 1993). This corresponds to stipulating that an underspecified logical form u entails a base, fully specified form φ only if all possible ways of resolving the underspecification in u entail φ. For reasons of space, we do not give details here, but note that this is entirely consistent with treating semantics in terms of a description of a logical formula. The relationship between the SEMENTS of non-sentential constituents and a more 'standard' formal language such as λ-calculus will be explored in future work.]
Definition 8 The Set Σ of Semantic Entities
s ∈ Σ if and only if s = ⊥ or s = ⟨s1, s2, s3, s4, s5⟩ such that:
• s1 = {[h, i]} is a hook;
• s2 = ∅ or {[h′, i′]} is a hole;
• s3 is a bag of EP conditions;
• s4 is a bag of HCONS conditions;
• s5 is a set of equalities between variables.

SEMENTs are written: [h1, i1]{holes}[eps][hcons]{eqs}.

We will not repeat the full composition definition, since it is unchanged from that in §2 apart from the addition of the append operation on hcons and a slight complication of eq to deal with the handle/index pairs:

eq(op(a1, a2)) = Tr(eq(a1) ∪ eq(a2) ∪ {hdle(hook(a1)) = hdle(hole(a2)), ind(hook(a1)) = ind(hole(a2))})

where Tr stands for transitive closure as before and hdle and ind access the handle and index of a pair. We can extend this to include (several) labelled holes and operations, as before, and these revised operations still form an algebra.

The truth definition for SEMENTs is analogous to before. We add to the model a set of labels L (handles denote these via g) and a well-founded partial order ≤ on L (this helps interpret the hcons; cf. Fernando (1997)). A SEMENT then denotes an element of H × . . . × H × P(G), where the Hs (= L × I) are the new hooks and holes. Note that the language Σ is first order, and we do not use λ-abstraction over higher-order elements. (Even though we do not use λ-calculus for composition, we could make use of λ-abstraction as a representation device, for instance for dealing with adjectives such as former; cf. Moore (1989).) For example, in the standard Montagovian view, a quantifier such as every is represented by the higher-order expression λPλQ∀x(P(x), Q(x)). In our framework, however, every is the following (using qeq conditions, as in the LinGO ERG; note that every is here a predicate rather than a quantifier, since MRSs are partial descriptions of logical forms in a base language):

[hf, x]{[]subj, []comp1, [h′, x]spec, . . .} [he : every(x, hr, hs)][hr =q h′]{}

and dog is:

[hd, y]{[]subj, []comp1, []spec, . . .} [hd : dog(y)][]{}

These compose via opspec to yield every dog:

[hf, x]{[]subj, []comp1, []spec, . . .} [he : every(x, hr, hs), hd : dog(y)] [hr =q h′]{h′ = hd, x = y}

This SEMENT is semantically equivalent to:

[hf, x]{[]subj, []comp1, []spec, . . .} [he : every(x, hr, hs), hd : dog(x)][hr =q hd]{}

A slight complication is that the determiner is also syntactically selected by the N′ via the SPR slot (following Pollard and Sag (1994)). However, from the standpoint of the compositional semantics, the determiner is the semantic head, and it is only its SPEC hole which is involved: the N′ must be treated as having an empty SPR hole.

In the ERG, the distinction between intersective and scopal modification arises because of distinctions in representation at the lexical level. The repetition of variables in the SEMENT of a lexical sign (corresponding to TFS coindexation) and the choice of type on those variables determines the type of modification.

Intersective modification: white dog:
dog: [hd, y]{[]subj, []comp1, . . . , []mod} [hd : dog(y)][]{}
white: [hw, x]{[]subj, []comp1, . . . , [hw, x]mod} [hw : white(x)][]{}
white dog (opmod): [hw, x]{[]subj, []comp1, . . . , []mod} [hd : dog(y), hw : white(x)][] {hw = hd, x = y}

Scopal modification: probably walks:
walks: [hw, e′]{[h′, x]subj, []comp1, . . . , []mod} [hw : walks(e′, x)][]{}
probably: [hp, e]{[]subj, []comp1, . . . , [h, e]mod} [hp : probably(hs)][hs =q h]{}
probably walks (opmod): [hp, e]{[h′, x]subj, []comp1, . . . , []mod} [hp : probably(hs), hw : walks(e′, x)] [hs =q h]{hw = h, e = e′}

6 Control and external arguments

We need to make one further extension to allow for control, which we do by adding an extra slot to the hooks and holes corresponding to the external argument (e.g., the external argument of a verb always corresponds to its subject position).
We illustrate this by showing two uses of expect; note the third slot in the hooks and holes for the external argument of each entity. In both cases, x′e is both the external argument of expect and its subject's index, but in the first structure x′e is also the external argument of the complement, thus giving the control effect.

expect 1 (as in Kim expected to sleep):
[he, ee, x′e]{[hs, x′e, x′s]subj, [hc, ec, x′e]comp1, . . .} [he : expect(ee, x′e, h′e)][h′e =q hc]{}

expect 2 (as in Kim expected that Sandy would sleep):
[he, ee, x′e]{[hs, x′e, x′s]subj, [hc, ec, x′c]comp1, . . .} [he : expect(ee, x′e, h′e)][h′e =q hc]{}

Although these uses require different lexical entries, the semantic predicate expect used in the two examples is the same, in contrast to Montagovian approaches, which either relate two distinct predicates via meaning postulates, or require an additional semantic combinator. The HPSG account does not involve such additional machinery, but its formal underpinnings have been unclear: in this algebra, it can be seen that the desired result arises as a consequence of the restrictions on variable assignments imposed by the equalities.

This completes our sketch of the algebra necessary to encode semantic composition in the ERG. We have constrained accessibility by enumerating the possible labels for holes and by stipulating the contents of the hooks. We believe that the handle, index, external argument triple constitutes all the semantic information that a sign should make accessible to a functor. The fact that only these pieces of information are visible means, for instance, that it is impossible to define a verb that controls the object of its complement.7 Although obviously changes to the syntactic valence features would necessitate modification of the hole labels, we think it unlikely that we will need to increase the inventory further.
In combination with the principles defined in Copestake et al. (1999) for qeq conditions, the algebra presented here results in a much more tightly specified approach to semantic composition than that in Pollard and Sag (1994). (Readers familiar with MRS will notice that the KEY feature used for semantic selection violates these accessibility conditions, but in the current framework, KEY can be replaced by KEYPRED, which points to the predicate alone.)

7 Comparison

Compared with λ-calculus, the approach to composition adopted in constraint-based grammars and formalized here has considerable advantages in terms of simplicity. The standard Montague grammar approach requires that arguments be presented in a fixed order, and that they be strictly typed, which leads to unnecessary multiplication of predicates which then have to be interrelated by meaning postulates (e.g., the two uses of expect mentioned earlier). Type raising also adds to the complexity. As standardly presented, λ-calculus does not constrain grammars to be monotonic, and does not control accessibility, since the variable of the functor that is λ-abstracted over may be arbitrarily deeply embedded inside a λ-expression. None of the previous work on unification-based approaches to semantics has considered constraints on composition in the way we have presented. In fact, Nerbonne (1995) explicitly advocates nonmonotonicity. Moore (1989) is also concerned with formalizing existing practice in unification grammars (see also Alshawi, 1992), though he assumes Prolog-style unification, rather than TFSs. Moore attempts to formalize his approach in the logic of unification, but it is not clear this is entirely successful. He has to divorce the interpretation of the expressions from the notion of truth with respect to the model, which is much like treating the semantics as a description of a logic formula.
Our strategy for formalization is closest to that adopted in Unification Categorial Grammar (Zeevat et al., 1987), but rather than composing actual logical forms we compose partial descriptions to handle semantic underspecification.

8 Conclusions and future work

We have developed a framework for formally specifying semantics within constraint-based representations which allows semantic operations in a grammar to be tightly specified and which allows a representation of semantic content which is largely independent of the feature structure architecture of the syntactic representation. HPSGs can be written which encode much of the algebra described here as constraints on types in the grammar, thus ensuring that the grammar is consistent with the rules on composition. There are some aspects which cannot be encoded within currently implemented TFS formalisms because they involve negative conditions: for instance, we could not write TFS constraints that absolutely prevent a grammar writer sneaking in a disallowed coindexation by specifying a path into the lzt. There is the option of moving to a more general TFS logic, but this would require very considerable research to develop reasonable tractability. Since the constraints need not be checked at runtime, it seems better to regard them as metalevel conditions on the description of the grammar, which can anyway easily be checked by code which converts the TFS into the algebraic representation. Because the ERG is large and complex, we have not yet fully completed the exercise of retrospectively implementing the constraints throughout. However, much of the work has been done and the process revealed many bugs in the grammar, which demonstrates the potential for enhanced maintainability. We have modified the grammar to be monotonic, which is important for the chart generator described in Carroll et al. (1999).
A chart generator must determine lexical entries directly from an input logical form: hence it will only work if all instances of nonmonotonicity can be identified in a grammar-specific preparatory step. We have increased the generator's reliability by making the ERG monotonic, and we expect further improvements in practical performance once we take full advantage of the restrictions in the grammar to cut down the search space.

Acknowledgements

This research was partially supported by the National Science Foundation, grant number IRI-9612682. Alex Lascarides was supported by an ESRC (UK) research fellowship. We are grateful to Ted Briscoe, Alistair Knott and the anonymous reviewers for their comments on this paper.

References

Alshawi, Hiyan [1992] (ed.) The Core Language Engine, MIT Press.
Carroll, John, Ann Copestake, Dan Flickinger and Victor Poznanski [1999] An Efficient Chart Generator for Lexicalist Grammars, The 7th International Workshop on Natural Language Generation, 86–95.
Copestake, Ann, Dan Flickinger, Ivan Sag and Carl Pollard [1999] Minimal Recursion Semantics: An Introduction, manuscript at www-csli.stanford.edu/~aac/newmrs.ps
Egg, Marcus [1998] Wh-Questions in Underspecified Minimal Recursion Semantics, Journal of Semantics, 15.1: 37–82.
Fernando, Tim [1997] Ambiguity in Changing Contexts, Linguistics and Philosophy, 20.6: 575–606.
Moore, Robert C. [1989] Unification-based Semantic Interpretation, The 27th Annual Meeting of the Association for Computational Linguistics (ACL-89), 33–41.
Nerbonne, John [1995] Computational Semantics: Linguistics and Processing, in Shalom Lappin (ed.) Handbook of Contemporary Semantic Theory, 461–484, Blackwells.
Pollard, Carl and Ivan Sag [1994] Head-Driven Phrase Structure Grammar, University of Chicago Press.
Reyle, Uwe [1993] Dealing with Ambiguities by Underspecification: Construction, Representation and Deduction, Journal of Semantics, 10.1: 123–179.
Sag, Ivan, and Tom Wasow [1999] Syntactic Theory: An Introduction, CSLI Publications.
Shieber, Stuart [1986] An Introduction to Unification-based Approaches to Grammar, CSLI Publications.
Zeevat, Henk [1989] A Compositional Approach to Discourse Representation Theory, Linguistics and Philosophy, 12.1: 95–131.
Zeevat, Henk, Ewan Klein and Jo Calder [1987] An Introduction to Unification Categorial Grammar, in Nick Haddock, Ewan Klein and Glyn Morrill (eds), Categorial Grammar, Unification Grammar, and Parsing: Working Papers in Cognitive Science, Volume 1, 195–222, Centre for Cognitive Science, University of Edinburgh.
Processing Broadcast Audio for Information Access

Jean-Luc Gauvain, Lori Lamel, Gilles Adda, Martine Adda-Decker, Claude Barras, Langzhou Chen, and Yannick de Kercadio
Spoken Language Processing Group, LIMSI-CNRS, B.P. 133, 91403 Orsay cedex, France ([email protected], http://www.limsi.fr/tlp)

Abstract

This paper addresses recent progress in speaker-independent, large vocabulary, continuous speech recognition, which has opened up a wide range of near and mid-term applications. One rapidly expanding application area is the processing of broadcast audio for information access. At LIMSI, broadcast news transcription systems have been developed for English, French, German, Mandarin and Portuguese, and systems for other languages are under development. Audio indexation must take into account the specificities of audio data, such as needing to deal with the continuous data stream and an imperfect word transcription. Some near-term application areas are audio data mining, selective dissemination of information and media monitoring.

1 Introduction

A major advance in speech processing technology is the ability of today's systems to deal with nonhomogeneous data, as is exemplified by broadcast data. With the rapid expansion of different media sources, there is a pressing need for automatic processing of such audio streams. Broadcast audio is challenging as it contains segments of various acoustic and linguistic natures, which require appropriate modeling. A special section in the Communications of the ACM devoted to "News on Demand" (Maybury, 2000) includes contributions from many of the sites carrying out active research in this area. Via speech recognition, spoken document retrieval (SDR) can support random access to relevant portions of audio documents, reducing the time needed to identify recordings in large multimedia databases.
The TREC (Text REtrieval Conference) SDR evaluation showed that only small differences in information retrieval performance are observed for automatic and manual transcriptions (Garofolo et al., 2000). Large vocabulary continuous speech recognition (LVCSR) is a key technology that can be used to enable content-based information access in audio and video documents, since most of the linguistic information is encoded in the audio channel of video data, which once transcribed can be accessed using text-based tools. This research has been carried out in a multilingual environment in the context of several recent and ongoing European projects. We highlight recent progress in LVCSR and describe some of our work in developing a system for processing broadcast audio for information access. The system has two main components, the speech transcription component and the information retrieval component. Versions of the LIMSI broadcast news transcription system have been developed in American English, French, German, Mandarin and Portuguese.

2 Progress in LVCSR

Substantial advances in speech recognition technology have been achieved during the last decade. Only a few years ago speech recognition was primarily associated with small vocabulary isolated word recognition and with speaker-dependent (often also domain-specific) dictation systems. The same core technology serves as the basis for a range of applications such as voice-interactive database access or limited-domain dictation, as well as more demanding tasks such as the transcription of broadcast data. With the exception of the inherent variability of telephone channels, for most applications it is reasonable to assume that the speech is produced in relatively stable environmental conditions and is in some cases spoken with the purpose of being recognized by the machine.
The ability of systems to deal with nonhomogeneous data as is found in broadcast audio (changing speakers, languages, backgrounds, topics) has been enabled by advances in a variety of areas, including: techniques for robust signal processing and normalization; improved training techniques which can take advantage of very large audio and textual corpora; algorithms for audio segmentation; unsupervised acoustic model adaptation; efficient decoding with long span language models; and the ability to use much larger vocabularies than in the past (64k words or more is common), reducing errors due to out-of-vocabulary words. With the rapid expansion of different media sources for information dissemination, including via the internet, there is a pressing need for automatic processing of the audio data stream. The vast majority of audio and video documents that are produced and broadcast do not have associated annotations for indexation and retrieval purposes, and since most of today's annotation methods require substantial manual intervention, the cost is too large to treat the ever increasing volume of documents. Broadcast audio is challenging to process as it contains segments of various acoustic and linguistic natures, which require appropriate modeling. Transcribing such data requires significantly higher processing power than what is needed to transcribe read speech data in a controlled environment, such as for speaker-adapted dictation. Although it is usually assumed that processing time is not a major issue since computer power has been increasing continuously, the amount of data appearing on information channels is increasing at a comparable rate. Therefore processing time is an important factor in making a speech transcription system viable for audio data mining and other related applications. Transcription word error rates of about 20% have been reported for unrestricted broadcast news data in several languages.
As shown in Figure 1, the LIMSI broadcast news transcription system for automatic indexation consists of an audio partitioner and a speech recognizer.

3 Audio partitioning

The goal of audio partitioning is to divide the acoustic signal into homogeneous segments, labeling and structuring the acoustic content of the data, and identifying and removing non-speech segments. The LIMSI BN audio partitioner relies on an audio stream mixture model (Gauvain et al., 1998). While it is possible to transcribe the continuous stream of audio data without any prior segmentation, partitioning offers several advantages over this straightforward solution. First, in addition to the transcription of what was said, other interesting information can be extracted, such as the division into speaker turns, the speaker identities, and background acoustic conditions. This information can be used both directly and indirectly for indexation and retrieval purposes. Second, by clustering segments from the same speaker, acoustic model adaptation can be carried out on a per-cluster basis, as opposed to on a single-segment basis, thus providing more adaptation data. Third, prior segmentation can avoid problems caused by linguistic discontinuity at speaker changes. Fourth, by using acoustic models trained on particular acoustic conditions (such as wide-band or telephone band), overall performance can be significantly improved. Finally, eliminating non-speech segments substantially reduces the computation time. The result of the partitioning process is a set of speech segments usually corresponding to speaker turns, with speaker, gender and telephone/wide-band labels (see Figure 2).

4 Transcription of Broadcast News

For each speech segment, the word recognizer determines the sequence of words in the segment, associating start and end times and an optional confidence measure with each word.
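The word recognizer's task just described can be stated as the standard maximum a posteriori decoding criterion used by HMM-based systems. The formula below is the textbook statement with our own notation; it is not reproduced from the paper:

```latex
\hat{W} \;=\; \operatorname*{argmax}_{W} \; P(W \mid A)
        \;=\; \operatorname*{argmax}_{W} \; p(A \mid W)\, P(W)
```

where A is the acoustic observation sequence for the segment, P(W) is the language model probability of the word string W (a 4-gram in the LIMSI system), and p(A|W) is the acoustic likelihood given by the HMMs.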
The LIMSI system, in common with most of today's state-of-the-art systems, makes use of statistical models of speech generation. From this point of view, message generation is represented by a language model which provides an estimate of the probability of any given word string, and the encoding of the message in the acoustic signal is represented by a probability density function. The speaker-independent 65k-word, continuous speech recognizer makes use of 4-gram statistics for language modeling and of continuous density hidden Markov models (HMMs) with Gaussian mixtures for acoustic modeling. Each word is represented by one or more sequences of context-dependent phone models as determined by its pronunciation. The acoustic and language models are trained on large, representative corpora for each task and language. Processing time is an important factor in making a speech transcription system viable for automatic indexation of radio and television broadcasts. For many applications there are limitations on the response time and the available computational resources, which in turn can significantly affect the design of the acoustic and language models. Word recognition is carried out in one or more decoding passes, with more accurate acoustic and language models used in successive passes. A 4-gram single pass dynamic network decoder has been developed (Gauvain and Lamel, 2000) which can achieve faster than real-time decoding with a word error under 30%, running in less than 100 Mb of memory on widely available platforms such as Pentium III or Alpha machines.

5 Multilinguality

A characteristic of the broadcast news domain is that, at least for what concerns major news events, similar topics are simultaneously covered in different emissions and in different countries and languages. Automatic processing carried out on contemporaneous data sources in different languages can serve for multi-lingual indexation and retrieval.
Multilinguality is thus of particular interest for media watch applications, where news may first break in another country or language. At LIMSI, broadcast news transcription systems have been developed for the American English, French, German, Mandarin and Portuguese languages. The Mandarin language was chosen because it is quite different from the other languages (tone and syllable-based), and Mandarin resources are available via the LDC, as well as reference performance results. Our system and other state-of-the-art systems can transcribe unrestricted American English broadcast news data with word error rates under 20%. Our transcription systems for French and German have comparable error rates for news broadcasts (Adda-Decker et al., 2000). The character error rate for Mandarin is also about 20% (Chen et al., 2000). Based on our experience, it appears that with appropriately trained models, recognizer performance is more dependent upon the type and source of data than on the language. For example, documentaries are particularly challenging to transcribe, as the audio quality is often not very high, and there is a large proportion of voice-over.

6 Spoken Document Retrieval

The automatically generated partition and word transcription can be used for indexation and information retrieval purposes. Techniques commonly applied to automatic text indexation can be applied to the automatic transcriptions of the broadcast news radio and TV documents. These techniques are based on document term frequencies, where the terms are obtained after standard text processing, such as text normalization, tokenization, stopping and stemming. Most of these preprocessing steps are the same as those used to prepare the texts for training the speech recognizer language models. While this offers advantages for speech recognition, it can lead to IR errors. For better IR results, some word sequences corresponding to acronyms, multiword named-entities (e.g.
Los Angeles), and words preceded by some particular prefixes (anti, co, bi, counter) are rewritten as a single word. Stemming is used to reduce the number of lexical items for a given word sense. The stemming lexicon contains about 32,000 entries and was constructed using Porter's algorithm (Porter, 1980) on the most frequent words in the collection, and then manually corrected. The information retrieval system relies on a unigram model per story. The score of a story is obtained by summing the query term weights, which are simply the log probabilities of the terms given the story model once interpolated with a general English model. This term weighting has been shown to perform as well as the popular TF×IDF weighting scheme (Hiemstra and Wessel, 1998; Miller et al., 1998; Ng, 1999; Spärck Jones et al., 1998). The text of the query may or may not include the index terms associated with relevant documents. One way to cope with this problem is to use query expansion (Blind Relevance Feedback, BRF (Walker and de Vere, 1990)) based on terms present in retrieved contemporary texts. The system was evaluated in the TREC SDR track, with known story boundaries. The SDR data collection contains 557 hours of broadcast news from the period of February through June 1998. This data includes 21,750 stories and a set of 50 queries with the associated relevance judgments (Garofolo et al., 2000).

[Figure 1: Overview of an audio transcription system. The audio partitioner divides the data stream into homogeneous acoustic segments, removing non-speech portions. The word recognizer identifies the words in each speech segment, associating time-markers with each word.]

[Figure 2 (excerpt; SGML markup reconstructed from the extracted text):
<audiofile filename=19980411 1600 1630 CNN HDL language=english>
<segment type=wideband gender=female spkr=1 stime=50.25 etime=86.83>
<wtime stime=50.38 etime=50.77> c.n.n. <wtime stime=50.77 etime=51.10> headline <wtime stime=51.10 etime=51.44> news <wtime stime=51.44 etime=51.63> i'm <wtime stime=51.63 etime=51.92> robert <wtime stime=51.92 etime=52.46> johnson
it is a day of final farewells in alabama the first funerals for victims of this week's tornadoes are being held today [...]
</segment>
...
</audiofile>
Figure 2: Example system output obtained by automatic processing of the audio stream of a CNN show broadcast on April 11, 1998 at 4pm. The output includes the partitioning and transcription results. To improve readability, word time stamps are given only for the first 6 words. Non-speech segments have been removed and the following information is provided for each speech segment: signal bandwidth (telephone or wideband), speaker gender, and speaker identity (within the show).]

Transcriptions      Werr    Base    BRF
Closed-captions     n/a     46.9%   54.3%
10xRT               20.5%   45.3%   53.9%
1.4xRT              32.6%   40.9%   49.4%

Table 1: Impact of the word error rate on the mean average precision using a 1-gram document model. The document collection contains 557 hours of broadcast news from the period of February through June 1998 (21,750 stories, 50 queries with the associated relevance judgments).
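The per-story scoring just described (summing log probabilities of query terms under a story unigram model interpolated with a general English model) can be sketched as follows. The interpolation weight `lam` and the data structures are our own illustrative choices, not values from the paper:

```python
import math
from collections import Counter

def story_score(query_terms, story_terms, collection_counts, collection_size,
                lam=0.5):
    """Sum of log P(term | story), where the story unigram model is
    interpolated with a general (collection-wide) unigram model.
    `lam` is an illustrative smoothing weight."""
    tf = Counter(story_terms)
    n = len(story_terms)
    score = 0.0
    for t in query_terms:
        p_story = tf[t] / n if n else 0.0
        p_general = collection_counts.get(t, 0) / collection_size
        p = lam * p_story + (1 - lam) * p_general
        if p == 0.0:
            return float('-inf')   # query term unseen everywhere
        score += math.log(p)
    return score
```

Stories containing the query terms score higher, while the general-English component keeps a single unseen term from zeroing out an otherwise relevant story.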
In order to assess the effect of the recognition time on the information retrieval results, we transcribed the 557 hours of broadcast news data using two decoder configurations: a single pass 1.4xRT system and a three pass 10xRT system. The word error rates are measured on a 10h test subset (Garofolo et al., 2000). The information retrieval results are given in terms of mean average precision (MAP), as is done for the TREC benchmarks, in Table 1 with and without query expansion. For comparison, results are also given for manually produced closed captions. With query expansion, comparable IR results are obtained using the closed captions and the 10xRT transcriptions, and a moderate degradation (4% absolute) is observed using the 1.4xRT transcriptions.

[Figure 3: Histogram of the number of speaker turns per section in 100 hours of audio data from radio and TV sources (NPR, ABC, CNN, CSPAN) from May-June 1996.]

7 Locating Story Boundaries

The broadcast news transcription system also provides non-lexical information along with the word transcription. This information is available in the partition of the audio track, which identifies speaker turns. It is interesting to see whether or not such information can be used to help locate story boundaries, since in the general case these are not known. Statistics were made on 100 hours of radio and television broadcast news with manual transcriptions including the speaker identities. Of the 2096 sections manually marked as reports (considered stories), 40% start without a manually annotated speaker change. This means that using only speaker change information for detecting document boundaries would miss 40% of the boundaries. With automatically detected speaker changes, the number of missed boundaries would certainly increase.
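The miss and false-alarm figures above come from aligning speaker-turn times with story-start times. A hypothetical sketch of that computation follows; the function name, matching tolerance and time representation are our own assumptions, not details from the paper:

```python
def boundary_stats(story_starts, turn_times, tol=1.0):
    """Fraction of story starts with no nearby speaker turn (miss rate),
    and fraction of speaker turns not near any story start (false-alarm
    rate). Times are in seconds; `tol` is an illustrative tolerance."""
    missed = sum(1 for s in story_starts
                 if not any(abs(s - t) <= tol for t in turn_times))
    false_alarms = sum(1 for t in turn_times
                       if not any(abs(s - t) <= tol for s in story_starts))
    return missed / len(story_starts), false_alarms / len(turn_times)
```

A story start with no speaker turn within the tolerance counts as a miss; a speaker turn far from every story start counts as a false alarm, mirroring the 40% miss and near-90% false-alarm figures reported in the text.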
At the same time, 11,160 of the 12,439 speaker turns occur in the middle of a document, resulting in a false alarm rate of almost 90%. A more detailed analysis shows that about 50% of the sections involve a single speaker, but that the distribution of the number of speaker turns per section falls off very gradually (see Figure 3). False alarms are not as harmful as missed detections, since it may be possible to merge adjacent turns into a single document in subsequent processing.

Figure 4: Distribution of document durations for 100 hours of data from May-June 1996 (1997 Hub-4, top) and for 557 hours from February-June 1998 (TREC-9 SDR corpus, bottom).

These results show that even perfect speaker turn boundaries cannot be used as the primary cue for locating document boundaries. They can, however, be used to refine the placement of a document boundary located near a speaker change. We also investigated using simple statistics on the durations of the documents. A histogram of the 2096 sections is shown in Figure 4. One third of the sections are shorter than 30 seconds. The histogram has a bimodal distribution with a sharp peak around 20 seconds, and a smaller, flat peak around 2 minutes. Very short documents are typical of headlines which are uttered by a single speaker, whereas longer documents are more likely to contain data from multiple talkers. This distribution led us to consider using a multi-scale segmentation of the audio stream into documents. Similar statistics were measured on the larger corpus (Figure 4, bottom). As proposed in (Abberley et al., 1999; Johnson et al., 1999), we segment the audio stream into overlapping documents of a fixed duration. As a result of optimization, we chose a 30 second window duration with a 15 second overlap.
Since there are many stories significantly shorter than 30s in broadcast shows (see Figure 4), we conjectured that it may be of interest to use a double windowing system in order to better target short stories (Gauvain et al., 2000). The window size of the smaller window was selected to be 10 seconds. So for each query, we independently retrieved two sets of documents, one set for each window size. Then for each document set, document recombination is done by merging overlapping documents until no further merges are possible. The score of a combined document is set to the maximum score of any one of its components. For each document derived from the 30s windows, we produce a time stamp located at the center point of the document. However, if any smaller documents are embedded in this document, we take the center of the best scoring document. This way we try to take advantage of both window sizes. The MAP using a single 30s window and the double windowing strategy are shown in Table 2. For comparison, the IR results using the manual story segmentation and the speaker turns located by the audio partitioner are also given. All conditions use the same word hypotheses, obtained with a speech recognizer which had no knowledge about the story boundaries.

Manual segmentation (NIST)   59.6%
Audio partitioner            33.3%
Single window (30s)          50.0%
Double window                52.3%

Table 2: Mean average precision with manual and automatically determined story boundaries. The document collection contains 557 hours of broadcast news from the period of February through June 1998 (21750 stories, 50 queries with the associated relevance judgments).

From these results we can clearly see the benefit of using a search engine specifically designed to retrieve stories in the audio stream. Using an a priori acoustic segmentation, the mean average precision is significantly reduced compared to a "perfect" manual segmentation, whereas the window-based search engine results are much closer.
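The window-based segmentation and the recombination of retrieved overlapping windows described above can be sketched as follows. The 30s window / 15s step values and the max-score merge rule follow the text; the data structures are illustrative assumptions.

```python
def make_windows(total_duration, win=30.0, step=15.0):
    """Cut the audio stream into overlapping fixed-duration documents,
    e.g. 30-second windows with a 15-second overlap."""
    t, windows = 0.0, []
    while t < total_duration:
        windows.append((t, min(t + win, total_duration)))
        t += step
    return windows

def merge_retrieved(scored_windows):
    """Merge overlapping retrieved windows into single documents until no
    further merges are possible; a merged document keeps the maximum
    score of its components, as described in the text."""
    merged = []
    for (start, end), score in sorted(scored_windows):
        if merged and start <= merged[-1][0][1]:
            (ms, me), msc = merged[-1]
            merged[-1] = ((ms, max(me, end)), max(msc, score))
        else:
            merged.append(((start, end), score))
    return merged
```

In the double-windowing setup, `make_windows` would be called once per window size (30s and 10s) and `merge_retrieved` applied to each retrieved set independently.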
Note that in the manual segmentation all non-story segments such as advertising have been removed. This reduces the risk of having out-of-topic hits and explains part of the difference between this condition and the other conditions. The problem of locating story boundaries is being further pursued in the context of the ALERT project, where one of the goals is to identify "documents" given topic profiles. This project is investigating the combined use of audio and video segmentation to more accurately locate document boundaries in the continuous data stream.

8 Recent Research Projects

The work presented in this paper has benefited from a variety of research projects at both the European and national levels. These collaborative efforts have enabled access to real-world data, allowing us to develop algorithms and models well-suited for near-term applications. The European project LE-4 OLIVE: A Multilingual Indexing Tool for Broadcast Material Based on Speech Recognition (http://twentyone.tpd.tno.nl/olive/) addressed methods to automate the disclosure of the information content of broadcast data, thus allowing content-based indexation. Speech recognition was used to produce a time-linked transcript of the audio channel of a broadcast, which was then used to produce a concept index for retrieval. Broadcast news transcription systems for French and German were developed. The French data come from a variety of television news shows and radio stations. The German data consist of TV news and documentaries from ARTE. OLIVE also developed tools for users to query the database, as well as cross-lingual access based on off-line machine translation of the archived documents, and online query translation.
The European project IST ALERT: Alert system for selective dissemination (http://www.fb9ti.uni-duisburg.de/alert) aims to associate state-of-the-art speech recognition with audio and video segmentation and automatic topic indexing to develop an automatic media monitoring demonstrator and evaluate it in the context of real-world applications. The targeted languages are French, German and Portuguese. Major media-monitoring companies in Europe are participating in this project. Two other related FP5 IST projects are CORETEX: Improving Core Speech Recognition Technology, and ECHO: European CHronicles Online. CORETEX (http://coretex.itc.it/) aims at improving core speech recognition technologies, which are central to most applications involving voice technology. In particular, the project addresses the development of generic speech recognition technology and methods to rapidly port technology to new domains and languages with limited supervision, and to produce enriched symbolic speech transcriptions. The ECHO project (http://pc-erato2.iei.pi.cnr.it/echo) aims to develop an infrastructure for access to historical films belonging to large national audiovisual archives. The project will integrate state-of-the-art language technologies for indexing, searching and retrieval, cross-language retrieval capabilities, and automatic film summary creation.

9 Conclusions

This paper has described some of the ongoing research activities at LIMSI in automatic transcription and indexation of broadcast data. Much of this research, which is at the forefront of today's technology, is carried out with partners with real needs for advanced audio processing technologies. Automatic speech recognition is a key technology for audio and video indexing. Most of the linguistic information is encoded in the audio channel of video data, which once transcribed can be accessed using text-based tools. This is in contrast to the image data, for which no common description language is widely adopted.
A variety of near-term applications are possible, such as audio data mining, selective dissemination of information (News-on-Demand), media monitoring, and content-based audio and video retrieval. It appears that with word error rates on the order of 20%, IR results comparable to those obtained on text data can be achieved. Even with the higher word error rates obtained by running a faster transcription system or by transcribing compressed audio data (Barras et al., 2000; Van Thong et al., 2000), such as can be loaded over the Internet, the IR performance remains quite good.

Acknowledgments

This work has been partially financed by the European Commission and the French Ministry of Defense. The authors thank Jean-Jacques Gangolf, Sylvia Hermier and Patrick Paroubek for their participation in the development of different aspects of the automatic indexation system described here.

References

Dave Abberley, Steve Renals, Dan Ellis and Tony Robinson, "The THISL SDR System at TREC-8", Proc. of the 8th Text Retrieval Conference TREC-8, Nov 1999.
Martine Adda-Decker, Gilles Adda, Lori Lamel, "Investigating text normalization and pronunciation variants for German broadcast transcription," Proc. ICSLP'2000, Beijing, China, October 2000.
Claude Barras, Lori Lamel, Jean-Luc Gauvain, "Automatic Transcription of Compressed Broadcast Audio," Proc. ICASSP'2001, Salt Lake City, May 2001.
Langzhou Chen, Lori Lamel, Gilles Adda and Jean-Luc Gauvain, "Broadcast News Transcription in Mandarin," Proc. ICSLP'2000, Beijing, China, October 2000.
John S. Garofolo, Cedric G.P. Auzanne, and Ellen M. Voorhees, "The TREC Spoken Document Retrieval Track: A Success Story," Proc. of the 6th RIAO Conference, Paris, April 2000. Also John S. Garofolo et al., "1999 TREC-8 Spoken Document Retrieval Track Overview and Results," Proc. of the 8th Text Retrieval Conference TREC-8, Nov 1999. (http://trec.nist.gov).
Jean-Luc Gauvain, Lori Lamel, "Fast Decoding for Indexation of Broadcast Data," Proc.
ICSLP'2000, 3:794-798, Oct 2000.
Jean-Luc Gauvain, Lori Lamel, Gilles Adda, "Partitioning and Transcription of Broadcast News Data," ICSLP'98, 5, pp. 1335-1338, Dec. 1998.
Jean-Luc Gauvain, Lori Lamel, Claude Barras, Gilles Adda, Yannick de Kercadio, "The LIMSI SDR system for TREC-9," Proc. of the 9th Text Retrieval Conference TREC-9, Nov 2000.
Alexander G. Hauptmann and Michael J. Witbrock, "Informedia: News-on-Demand Multimedia Information Acquisition and Retrieval," Proc. Intelligent Multimedia Information Retrieval, M. Maybury, ed., AAAI Press, pp. 213-239, 1997.
Djoerd Hiemstra, Wessel Kraaij, "Twenty-One at TREC-7: Ad-hoc and Cross-language track," Proc. of the 7th Text Retrieval Conference TREC-7, Nov 1998.
Sue E. Johnson, Pierre Jourlin, Karen Spärck Jones, Phil C. Woodland, "Spoken Document Retrieval for TREC-8 at Cambridge University," Proc. of the 8th Text Retrieval Conference TREC-8, Nov 1999.
Mark Maybury, ed., Special Section on "News on Demand", Communications of the ACM, 43(2), Feb 2000.
David Miller, Tim Leek, Richard Schwartz, "Using Hidden Markov Models for Information Retrieval," Proc. of the 7th Text Retrieval Conference TREC-7, Nov 1998.
Kenney Ng, "A Maximum Likelihood Ratio Information Retrieval Model," Proc. of the 8th Text Retrieval Conference TREC-8, 413-435, Nov 1999.
M. F. Porter, "An algorithm for suffix stripping", Program, 14, pp. 130-137, 1980.
Karen Spärck Jones, S. Walker, Stephen E. Robertson, "A probabilistic model of information retrieval: development and status," Technical Report of the Computer Laboratory, University of Cambridge, U.K., 1998.
J.M. Van Thong, David Goddeau, Anna Litvinova, Beth Logan, Pedro Moreno, Michael Swain, "SpeechBot: a Speech Recognition based Audio Indexing System for the Web", Proc. of the 6th RIAO Conference, Paris, April 2000.
S. Walker, R. de Vere, "Improving subject retrieval in online catalogues: 2.
Relevance feedback and query expansion”, British Library Research Paper 72, British Library, London, U.K., 1990.
2001
2
A machine learning approach to the automatic evaluation of machine translation

Simon Corston-Oliver, Michael Gamon and Chris Brockett
Microsoft Research
One Microsoft Way
Redmond WA 98052, USA
{simonco, mgamon, chrisbkt}@microsoft.com

Abstract

We present a machine learning approach to evaluating the well-formedness of output of a machine translation system, using classifiers that learn to distinguish human reference translations from machine translations. This approach can be used to evaluate an MT system, tracking improvements over time; to aid in the kind of failure analysis that can help guide system development; and to select among alternative output strings. The method presented is fully automated and independent of source language, target language and domain.

1 Introduction

Human evaluation of machine translation (MT) output is an expensive process, often prohibitively so when evaluations must be performed quickly and frequently in order to measure progress. This paper describes an approach to automated evaluation designed to facilitate the identification of areas for investigation and improvement. It focuses on evaluating the well-formedness of output and does not address issues of evaluating content transfer. Researchers are now applying automated evaluation in MT and natural language generation tasks, both as system-internal goodness metrics and for the assessment of output. Langkilde and Knight (1998), for example, employ n-gram metrics to select among candidate outputs in natural language generation, while Ringger et al. (2001) use n-gram perplexity to compare the output of MT systems. Su et al. (1992), Alshawi et al. (1998) and Bangalore et al. (2000) employ string edit distance between reference and output sentences to gauge output quality for MT and generation. To be useful to researchers, however, assessment must provide linguistic information that can help identify areas where work is required.
(See Nyberg et al., 1994 for useful discussion of this issue.) The better the MT system, the more its output will resemble human-generated text. Indeed, MT might be considered a solved problem should it ever become impossible to distinguish automated output from human translation. We have observed that in general humans can easily and reliably categorize a sentence as either machine- or human-generated. Moreover, they can usually justify their decision. This observation suggests that evaluation of the well-formedness of output sentences can be treated as a classification problem: given a sentence, how accurately can we predict whether it has been translated by machine? In this paper we cast the problem of MT evaluation as a machine learning classification task that targets both linguistic features and more abstract features such as n-gram perplexity.

2 Data

Our corpus consists of 350,000 aligned Spanish-English sentence pairs taken from published computer software manuals and online help documents. We extracted 200,000 English sentences for building language models to evaluate per-sentence perplexity. From the remainder of the corpus, we extracted 100,000 aligned sentence pairs. The Spanish sentences in this latter sample were then translated by the Microsoft machine translation system, which was trained on documents from this domain (Richardson et al., 2001). This yielded a set of 200,000 English sentences, one half of which were English reference sentences, and the other half of which were MT output. (The Spanish sentences were not used in building or evaluating the classifiers.) We split the 200,000 English sentences 90/10, to yield 180,000 sentences for training classifiers and 20,000 sentences that we used as held-out test data. Training and test data were evenly divided between reference English sentences and Spanish-to-English translations.

3 Features

The selection of features used in our classification task was motivated by failure analysis of system output.
We were particularly interested in those linguistic features that could aid in qualitative analysis, as we discuss in section 5. For each sentence we automatically extracted 46 features by performing a syntactic parse using the Microsoft NLPWin natural language processing system (Heidorn, 2000) and language modeling tools. The features extracted fall into two broad categories:

(i) Perplexity measures were extracted using the CMU-Cambridge Statistical Language Modeling Toolkit (Clarkson and Rosenfeld, 1997). We calculated two sets of values: lexicalized trigram perplexity, with values discretized into deciles, and part-of-speech (POS) trigram perplexity. For the latter we used the following sixteen POS tags: adjective, adverb, auxiliary, punctuation, complementizer, coordinating conjunction, subordinating conjunction, determiner, interjection, noun, possessor, preposition, pronoun, quantifier, verb, and other.

(ii) Linguistic features fell into several subcategories: branching properties of the parse; function word density; constituent length; and other miscellaneous features.

We employed a selection of features to provide a detailed assessment of the branching properties of the parse tree. The linguistic motivation behind this was twofold. First, it had become apparent from failure analysis that MT system output tended to favor right-branching structures over noun compounding. Second, we hypothesized that translation from languages whose branching properties are radically different from English (e.g. Japanese, or a verb-second language like German) might pollute the English output with non-English characteristics. For this reason, assessment of branching properties is a good candidate for a language-pair-independent measure. The branching features we employed are given below. Indices are scalar counts; other measures are normalized for sentence length.
- number of right-branching nodes across all constituent types
- number of right-branching nodes for NPs only
- number of left-branching nodes across all constituent types
- number of left-branching nodes for NPs only
- number of premodifiers across all constituent types
- number of premodifiers within NPs only
- number of postmodifiers across all constituent types
- number of postmodifiers within NPs only
- branching index across all constituent types, i.e. the number of right-branching nodes minus the number of left-branching nodes
- branching index for NPs only
- branching weight index: number of tokens covered by right-branching nodes minus number of tokens covered by left-branching nodes across all categories
- branching weight index for NPs only
- modification index, i.e. the number of premodifiers minus the number of postmodifiers across all categories
- modification index for NPs only
- modification weight index: length in tokens of all premodifiers minus length in tokens of all postmodifiers across all categories
- modification weight index for NPs only
- coordination balance, i.e. the maximal length difference in coordinated constituents

We considered the density of function words, i.e. the ratio of function words to content words, because of observed problems in WinMT output. Pronouns received special attention because of frequent problems detected in failure analysis.
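As an illustration of the branching-index features, here is one way to count left- and right-branching nodes over a toy parse tree. The nested-tuple encoding and the exact definition of a branching node (a node whose leftmost/rightmost child is itself a constituent) are simplifying assumptions for this sketch; the paper computes these features from NLPWin parses.

```python
def is_leaf(node):
    # Leaves are plain token strings; constituents are tuples.
    return isinstance(node, str)

def branching_counts(node):
    """Count (left-branching, right-branching) nodes in a parse tree
    encoded as (label, child1, child2, ...) with string leaves."""
    if is_leaf(node):
        return 0, 0
    children = node[1:]
    left = 0 if is_leaf(children[0]) else 1
    right = 0 if is_leaf(children[-1]) else 1
    for child in children:
        l, r = branching_counts(child)
        left += l
        right += r
    return left, right

def branching_index(tree):
    # Number of right-branching nodes minus number of left-branching nodes.
    left, right = branching_counts(tree)
    return right - left
```

A strongly right-branching tree yields a positive index and a left-branching one a negative index, which is the contrast the feature is meant to capture.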
The density features are:

- overall function word density
- density of determiners/quantifiers
- density of pronouns
- density of prepositions
- density of punctuation marks, specifically commas and semicolons
- density of auxiliary verbs
- density of conjunctions
- density of different pronoun types: Wh, 1st, 2nd, and 3rd person pronouns

We also measured the following constituent sizes:

- maximal and average NP length
- maximal and average AJP length
- maximal and average PP length
- maximal and average AVP length
- sentence length

On a lexical level, the presence of out-of-vocabulary (OOV) words is frequently caused by the direct transfer of source language words for which no translation could be found. The top-level syntactic template, i.e. the labels of the immediate children of the root node of a sentence, was also used, as was subject-verb disagreement. The final five features are:

- number of OOV words
- the presence of a word containing a non-English letter, i.e. an extended ASCII character (a special case of the OOV problem)
- label of the root node of the sentence (declarative, imperative, question, NP, or "FITTED" for non-spanning parses)
- sentence template, i.e. the labels of the immediate children of the root node
- subject-verb disagreement

4 Decision Trees

We used a set of automated tools to construct decision trees (Chickering et al., 1997) based on the features extracted from the reference and MT sentences. To avoid overfitting, we specified that nodes in the decision tree should not be split if they accounted for fewer than fifty cases. In the discussion below we distinguish the perplexity features from the linguistic features.

4.1 Decision trees built using all training data

Table 1 gives the accuracy of the decision trees when trained on all 180,000 training sentences and evaluated against the 20,000 held-out test sentences.
Since the training data and test data contain an even split between reference human translations and machine translations, the baseline for comparison is 50.00%. As Table 1 shows, the decision trees dramatically outperform this baseline. Using only perplexity features or only linguistic features yields accuracy substantially above this baseline. Combining the two sets of features yields the highest accuracy, 82.89%.

Features used              Accuracy (%)
All features               82.89
Perplexity features only   74.73
Linguistic features only   76.51

Table 1: Accuracy of the decision trees

Notably, most of the annotated features were selected by the decision tree tools. Two features were found not to be predictive. The first non-selected feature is the presence of a word containing an extended ASCII character, suggesting that general OOV features were sufficient and subsume the effect of this narrower feature. Secondly, subject-verb disagreement was also not predictive, validating the consistent enforcement of agreement constraints in the natural language generation component of the MT system. In addition, only eight of approximately 5,200 observed sentence templates turned out to be discriminatory. For a different use of perplexity in classification, see Ringger et al. (2001), who compare the perplexity of a sentence using a language model built solely from reference translations to the perplexity using a language model built solely from machine translations. The output of such a classifier could be used as an input feature in building decision trees.
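The paper builds its trees with the Bayesian decision-tree tools of Chickering et al. (1997); purely as an illustration of the minimum-node-size control described above (no split on nodes accounting for fewer than fifty cases), here is a minimal misclassification-based tree learner for binary 0/1 labels. Everything about it apart from the min-cases idea is a simplifying assumption, not the authors' implementation.

```python
def best_split(X, y):
    """Find the (feature, threshold) split minimizing misclassifications,
    assuming binary 0/1 labels."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [lab for row, lab in zip(X, y) if row[f] <= t]
            right = [lab for row, lab in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            err = (min(left.count(0), left.count(1))
                   + min(right.count(0), right.count(1)))
            if best is None or err < best[0]:
                best = (err, f, t)
    return best

def grow(X, y, min_cases=50):
    """Grow a tree, refusing to split nodes with fewer than min_cases
    examples (the paper's overfitting control). Leaves are labels;
    internal nodes are (feature, threshold, left_subtree, right_subtree)."""
    majority = int(sum(y) * 2 >= len(y))
    if len(y) < min_cases or len(set(y)) == 1:
        return majority
    split = best_split(X, y)
    if split is None:
        return majority
    _, f, t = split
    li = [i for i, row in enumerate(X) if row[f] <= t]
    ri = [i for i, row in enumerate(X) if row[f] > t]
    return (f, t,
            grow([X[i] for i in li], [y[i] for i in li], min_cases),
            grow([X[i] for i in ri], [y[i] for i in ri], min_cases))

def predict(tree, row):
    while isinstance(tree, tuple):
        f, t, lo, hi = tree
        tree = lo if row[f] <= t else hi
    return tree
```

With `min_cases=50`, small pockets of training data cannot force extra splits, which is the overfitting guard the paper describes.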
Figure 1: Accuracy with varying amounts of training data (average best accuracy vs. number of training cases, for all features, perplexity features only, and linguistic features only).

4.2 Varying the amount of training data

For our experiments, we had access to several hundred thousand sentences from the target domain. To measure the effect of reducing the size of the training data set on the accuracy of the classifier, we built classifiers using samples of the training data and evaluated them against the same held-out sample of 20,000 sentences. We randomly extracted ten samples containing each of the following numbers of sentences: {1,000, 2,000, 3,000, 4,000, 5,000, 6,000, 12,000, 25,000, 50,000, 100,000, 150,000}. Figure 1 shows the effect of varying the size of the training data. The data point graphed is the average accuracy over the ten samples at a given sample size, with error bars showing the range from the least accurate decision tree at that sample size to the most accurate. As Figure 1 shows, the models built using only perplexity features do not benefit from additional training data. The models built using linguistic features, however, benefit substantially, with accuracy leveling off after 150,000 training cases. With only 2,000 training cases, the classifiers built using all features range in accuracy from 75.06% to 78.84%, substantially above the baseline accuracy of 50%.

5 Discussion

As the results in section 4 show, it is possible to build classifiers that can distinguish human reference translations from the output of a machine translation system with high accuracy. We thus have an automatic mechanism that can perform the task that humans appear to do with ease, as noted in section 1.
The best result, a classifier with 82.89% accuracy, is achieved by combining perplexity calculations with a set of finer-grained linguistic features. Even with as few as 2,000 training cases, accuracy exceeded 75%. In the discussion below we consider the advantages and possible uses of this automatic evaluation methodology.

5.1 Advantages of the approach

Once an appropriate set of features has been selected and tools to automatically extract those features are in place, classifiers can be built and evaluated quickly. This overcomes the two problems associated with traditional manual evaluation of MT systems: manual evaluation is both costly and time-consuming. Indeed, an automated approach is essential when dealing with an MT system that is under constant development in a collaborative research environment. The output of such a system may change from day to day, requiring frequent feedback to monitor progress. The methodology does not crucially rely on any particular set of features. As an MT system matures, more and more subtle cues might be necessary to distinguish between human and machine translations. Any linguistic feature that can be reliably extracted can be proposed as a candidate feature to the decision tree tools. The methodology is also not sensitive to the domain of the training texts. All that is needed to build classifiers for a new domain is a sufficient quantity of aligned translations.

5.2 Possible applications of the approach

The classifiers can be used for evaluating a system overall, providing feedback to aid in system development, and in evaluating individual sentences.

Evaluating an MT system overall

Evaluating the accuracy of the classifier against held-out data is equivalent to evaluating the fluency of the MT system. As the MT system improves, its output will become more like the human reference translations.
To measure improvement over time, we would hold the set of features constant and build and evaluate new classifiers using the human reference translations and the output of the MT system at a given point in time. Using the same set of features, we expect the accuracy of the classifiers to go down over time as the MT output becomes more like human translations.

Feedback to aid system development

Our primary interest in evaluating an MT system is to identify areas that require improvement. This has been the motivation for using linguistic features in addition to perplexity measures. From the point of view of system development, perplexity is a rather opaque measure. This can be viewed as both a strength and a weakness. On the one hand, it is difficult to tune a system with the express goal of causing perplexity to improve, rendering perplexity a particularly good objective measurement. On the other hand, given a poor perplexity score, it is not clear how to improve a system without additional failure analysis. We used the DNETVIEWER tool (Heckerman et al., 2000), a visualization tool for viewing decision trees and Bayesian networks, to explore the decision trees and identify problem areas in our MT system. In one visualization, shown in Figure 2, DNETVIEWER allows the user to adjust a slider to see the order in which the features were selected during the heuristic search that guides the construction of decision trees. The most discriminatory features are those which cause the MT translations to look most awful, or are characteristics of the reference translations that ought to be emulated by the MT system. For the coarse model shown in Figure 2, the distance between pronouns (nPronDist) is the strongest predictor, followed by the number of second person pronouns (n2ndPersPron), the number of function words (nFunctionWords), and the distance between prepositions (nPrepDist). Using DNETVIEWER we are able to explore the decision tree, as shown in Figure 3.
Viewing the leaf nodes in the decision tree, we see a probability distribution over the possible states of the target variable. In the case of the binary classifier here, this is the probability that a sentence will be a reference translation. In Figure 3, the topmost leaf node shows that p(Human translation) is low. We modified DNETVIEWER so that double-clicking on the leaf node would display reference translations and MT sentences from the training data. We display a window showing the path through the decision tree, the probability that the sentence is a reference translation given that path, and the sentences from the training data identified by the features on the path. This visualization allows the researcher to view manageable groups of similar problem sentences with a view to identifying classes of problems within the groups. A goal for future research is to select additional linguistic features that will allow us to pinpoint problem areas in the MT system and thereby further automate failure analysis.

Figure 2: Using the slider to view the best predictors
Figure 3: Examining sentences at a leaf node in the decision tree
Figure 4: Examining sentences at a leaf node in the decision tree

Decision trees are merely one form of classifier that could be used for the automated evaluation of an MT system. In preliminary experiments, the accuracy of classifiers using support vector machines (SVMs) (Vapnik, 1998; Platt et al., 2000) exceeded the accuracy of the decision tree classifiers by a little less than one percentage point using a linear kernel function, and by a slightly greater margin using a polynomial kernel function of degree three. We prefer the decision tree classifiers because they allow a researcher to explore the classification system and focus on problem areas and sentences. We find this method for exploring the data more intuitive than attempting to visualize the location of sentences in the high-dimensional space of the corresponding SVM.
Evaluating individual sentences

In addition to system evaluation and failure analysis, classifiers could be used on a per-sentence basis to guide the output of an MT system by selecting among multiple candidate strings. If no candidate is judged sufficiently similar to a human reference translation, the sentence could be flagged for human post-editing.

6 Conclusion

We have presented a method for evaluating the fluency of MT, using classifiers based on linguistic features to emulate the human ability to distinguish MT from human translation. The techniques we have described are system- and language-independent. Possible applications of our approach include system evaluation, failure analysis to guide system development, and selection among alternative possible outputs. We have focused on structural aspects of a text that can be used to evaluate fluency. A full evaluation of MT quality would of course need to include measurements of idiomaticity and techniques to verify that the semantic and pragmatic content of the source language had been successfully transferred to the target language.

Acknowledgements

Our thanks go to Eric Ringger and Max Chickering for programming assistance with the tools used in building and evaluating the decision trees, and to Mike Carlson for help in sampling the initial datasets. Thanks also to John Platt for helpful discussion on parameter setting for the SVM tools, and to the members of the MSR NLP group for feedback on the uses of the methodology presented here.

References

Alshawi, H., S. Bangalore, and S. Douglas. 1998. Automatic acquisition of hierarchical transduction models for machine translation. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics, Montreal, Canada, Vol. I: 41-47.
Bangalore, S., O. Rambow, and S. Whittaker. 2000. Evaluation Metrics for Generation. In Proceedings of the International Conference on Natural Language Generation (INLG 2000), Mitzpe Ramon, Israel. 1-13.
Chickering, D.
M., D. Heckerman, and C. Meek. 1997. A Bayesian approach to learning Bayesian networks with local structure. In Geiger, D. and Prakash P. Shenoy (Eds.), Uncertainty in Artificial Intelligence: Proceedings of the Thirteenth Conference. 80-89.
Clarkson, P. and R. Rosenfeld. 1997. Statistical Language Modeling Using the CMU-Cambridge Toolkit. Proceedings of Eurospeech97. 2707-2710.
Heckerman, D., D. M. Chickering, C. Meek, R. Rounthwaite, and C. Kadie. 2000. Dependency networks for inference, collaborative filtering and data visualization. Journal of Machine Learning Research 1:49-75.
Heidorn, G. E. 2000. Intelligent writing assistance. In R. Dale, H. Moisl and H. Somers (Eds.), Handbook of Natural Language Processing. New York, NY. Marcel Dekker. 181-207.
Langkilde, I., and K. Knight. 1998. Generation that exploits corpus-based statistical knowledge. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Montreal, Canada. 704-710.
Nyberg, E. H., T. Mitamura, and J. G. Carbonell. 1994. Evaluation Metrics for Knowledge-Based Machine Translation. In Proceedings of the 15th International Conference on Computational Linguistics (Coling 94), Kyoto, Japan. 95-99.
Platt, J., N. Cristianini, and J. Shawe-Taylor. 2000. Large margin DAGs for multiclass classification. In Advances in Neural Information Processing Systems 12, MIT Press. 547-553.
Richardson, S., B. Dolan, A. Menezes, and J. Pinkham. 2001. Achieving commercial-quality translation with example-based methods. Submitted for review.
Ringger, E., M. Corston-Oliver, and R. Moore. 2001. Using Word-Perplexity for Automatic Evaluation of Machine Translation. Manuscript.
Su, K., M. Wu, and J. Chang. 1992. A new quantitative quality measure for machine translation systems. In Proceedings of COLING-92, Nantes, France. 433-439.
Vapnik, V. 1998. Statistical Learning Theory. Wiley-Interscience, New York.
2001
20